DOC 1.1 -- MATH directory documentation

NOTE!  This is a plain ASCII text file containing multiple
documents. You may find it most convenient to view or print this file
by running the DOC.EXE program (supplied on this disk) on your PC.
This is the first Goodies Disk to do it this way.  Hope you like it.

:GD9
:Mathematical Goodies
:-jkh-
@@AREA       SG
(Comp.sys.hp48)
Item: 2753 by an6602@anon.penet.fi [Dewayne Cushman]
Subj: AREA: calculates the area of a polygon with known
      coordinates.
Date: 28 Jan 1993

Six programs are in the AREA directory. These are used
to find the area of any polygon.

When extracted, the AREA directory contains 6 programs:

1.  AREA - main program

2.  ####### - degree symbols for neatness

3.  LST - The last matrix that was entered

4.  ###### - degree symbols for neatness

5.  ##### - degree symbols for neatness

6.  HELP - Hit HELP for instructions: enter the matrix
    (the HP's MATRIX editor is handy) and store it in the
    LST variable (LS+LST). Hit ENTER a 2nd time to see
    more help. Then hit AREA to get the area, or just hit
    AREA with the matrix on the stack.

    e.g.
    For [[0 0] [0 10] [10 10] [10 0] [0 0]]
                               -- an (n+1)-by-2 matrix
                               (depends on the polygon);
                               this is a 10x10 box.
                               NOTE: the first point is
                               repeated as the last.

    1:            100          -- In square feet, if those
                               are the units you used.
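
The calculation AREA performs can be sketched in Python. This is a
hedged reconstruction (the RPL source is not listed here), assuming
the standard surveyor's "shoelace" formula; the function name is mine.

```python
def polygon_area(pts):
    """pts: list of (x, y) vertices, with the first point repeated last."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        s += x1 * y2 - x2 * y1      # signed cross product for each edge
    return abs(s) / 2.0

# The 10x10 box from the example above:
box = [(0, 0), (0, 10), (10, 10), (10, 0), (0, 0)]
print(polygon_area(box))   # 100.0
```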

BIO - Dewayne Cushman wrote these programs.

      We are both CE students at OU. Ken will graduate
      in Spr. '93.

These programs are what I (Kenneth Hobson) use in my work
for the Oklahoma Department of Transportation (County
Bridge - Norman). They would have been nice to have had
when I took SURVEY and other math courses.

Modify anything you like, as I wrote these and posted
them for all to enjoy.

Regards,  Kenneth Hobson
internet  at
krhobson@midway.ecn.uoknor.edu

And some local BBS's such as protoboard bbs 405-275-6827
@@CHEBY      SG
Chebychev Polynomial Coefficients, by Ron Zukerman

for HP 48 S/SX/G/GX

(Comp.sys.hp48)
Item: 2303 by Fallonjb95%cs14@cadetmail.usafa.af.mil
              [Joshua B. L. Fallon]
Subj: need help with algorithm
Date: 21 Oct 1993

I only recently got my HP48GX, and am confused about how
to code this algorithm that I have. What I need to do is
use it to determine Chebyshev polynomials, then take the
polynomial and put it into the standard polynomial
'list' form (i.e. {1 4 6 4 1}). I would have it in that
form to make it easier to change it into a transfer
function (for EE). Any help at all would be a big help.
I need to be able to get only the nth polynomial.

C0(w) = 1

C1(w) = w
.
.
.
Cn+1(w) = 2w*Cn(w) - Cn-1(w)

Thanks in advance for any help.
******************************************
*      `` `                -----         *
*  -= RAINMAN =-             |           *
*            Blue         \ / \  /       *
*              Skies...     \  /         *
* FALLONJB95%CS14@CADETMAIL.USAFA.AF.MIL *
******************************************

----------
Resp: 1 of 1 by ronzu@comm.mot.com [Ron Zuckerman]
Date: 21 Oct 1993

I only have a HP48SX, so I don't know what a standard
polynomial list form is supposed to look like. However,
here is a program to calculate the coefficients of a
Chebyshev polynomial:

<< -> N
    << [ 1 ]
       IF N 0 > THEN [ 0 1 ]
           IF N 1 > THEN
               3 N 1 + FOR I
                   0 OVER OBJ-> DROP I ->ARRY 2 * 3 PICK
                   OBJ-> DROP 0 0 I ->ARRY - ROT DROP
               NEXT
           END
           SWAP DROP
       END
       OBJ-> OBJ-> DROP ->LIST
    >>
>>

To use the program just put the value of N on the stack
and run this program. The results are returned in a list
whose structure is as follows:

{ c0 c1 c2 ... cN }

where: Cn(w) = c0 + c1*w + c2*w^2 + ... + cN*w^N
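
The same recurrence can be sketched in Python for cross-checking. The
coefficient bookkeeping below is mine, not a translation of Ron's RPL;
c[k] holds the coefficient of w^k, matching the list layout above.

```python
def cheby_coeffs(n):
    """Coefficient list of the nth Chebyshev polynomial, constant term first."""
    c_prev, c_cur = [1], [0, 1]          # C0 = 1, C1 = w
    if n == 0:
        return c_prev
    for _ in range(2, n + 1):
        # C(n+1) = 2w*C(n) - C(n-1)
        nxt = [0] + [2 * c for c in c_cur]      # multiply C(n) by 2w
        for k, c in enumerate(c_prev):
            nxt[k] -= c                         # subtract C(n-1)
        c_prev, c_cur = c_cur, nxt
    return c_cur

print(cheby_coeffs(4))   # [1, 0, -8, 0, 8], i.e. C4 = 8w^4 - 8w^2 + 1
```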

Ron Zuckerman, KA4RPD  E-mail: ronzu@isp.csg.mot.com
Phone: (708)523-7805   FAX: (708)523-7847
@@CUFIT      SG

Subj: Polynomial Curvefit
Author: Joe Muccillo
Date: 24 Sept. 1993


[Requires the MATRIX library. -jkh-]

[Note: This is a statistical "best fit" program, not
 necessarily a mathematically exact fit. For
 mathematical work, see PFIT. -jkh-]

From the looks of some of the programs on the Goodies
disks it would appear that just about everyone has had a
go at writing one of these programs. Here is my attempt
which may be of some use.

Load the CUFIT subdirectory onto your HP48. The two
program files are "FIT" and "PLOT". The remaining
variables are either input or output variables. A
description of each of the files or variables is given
below. As with most of my programs you will need a copy
of my matrix utilities library, to be found somewhere on
this disk, in order to run it.
     
     FIT - Fits a polynomial of specified order to the X
           Y data contained in the statistics data
           matrix, "SumDAT". The program is interactive.
           All you have to do is run the program and
           enter the degree of polynomial when prompted.
           Of course, the degree of the polynomial must
           be less than N-1, where N is the number of
           data points. On successful completion the
           program stores information into the "COEF",
           "YNEW" and "RESI" variables.

   PLOT  - Performs the following functions:

               = Takes the matrix of coefficients from
                 the "COEF" variable and forms the
                 equation of the polynomial, storing it
                 in the "EQ" variable.

               = Draws a scatter plot of the data
                 contained in the "SumDAT" matrix.

               = Superimposes on the scatter plot a plot
                 of the polynomial contained in the "EQ"
                 variable.

    COEF - Contains a one column matrix containing the
           coefficients of the polynomial starting with
           the constant term in the first row and the
           coefficient of X^(N-1) in the final row.

    YNEW - Contains the new values of the dependent
           variable, Y from the polynomial curve
           corresponding to the values of the independent
           variable, X in the "SumDAT" variable.

    RESI - Contains the resultant residuals, i.e. the
           difference between the new and old Y values.


Successful execution of the FIT program also leaves one
number on the stack. This value, tagged "SUMSQ", is the
sum of the square of the resultant residuals and has been
included to give some idea of how well the polynomial of
specified degree fits the data. On its own this value
doesn't mean much but it is useful in comparing how well
polynomials of varying degrees fit the given data.
Running the plot program is also useful in seeing how
good the fit is.
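
For readers without the MATRIX library, the step FIT performs can be
sketched in plain Python. This assumes an ordinary least-squares fit
via the normal equations (CUFIT's internals are not listed here); the
names mirror the COEF, YNEW, RESI and SUMSQ outputs described above.

```python
def polyfit(xs, ys, degree):
    n = degree + 1
    # Normal equations (V^T V) a = V^T y, with V the Vandermonde matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, n))) / A[r][r]
    ynew = [sum(c * x ** k for k, c in enumerate(coef)) for x in xs]
    resi = [yn - y for yn, y in zip(ynew, ys)]
    sumsq = sum(r * r for r in resi)
    return coef, ynew, resi, sumsq

coef, ynew, resi, sumsq = polyfit([0, 1, 2, 3], [1, 3, 5, 7], 1)
print(coef)     # ~[1.0, 2.0]: constant term first, as in COEF
```

The normal-equations route also shows where the high-degree trouble
comes from: the matrix V^T V becomes very ill-conditioned as the
degree grows, so 12-digit arithmetic loses accuracy quickly.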

I have noticed one problem with the program which I
haven't been able to resolve. When fitting polynomials
of high degree to several data points the "SUMSQ"
quantity actually increases with increasing degree of
polynomial. The plot also shows that the polynomial
curve does not fit the data very well. I initially
attributed this to numerical problems resulting from
insufficient significant figures used in calculations,
particularly when high degree polynomials are used. Has
anyone else had this problem, and if so, has anyone been
able to solve it?



Joe Muccillo
66 Prospect St 
Pascoe Vale, 3044
Melbourne
AUSTRALIA

@@ERRF       SG
(Comp.sys.hp48)
Item: 56 by vsteiger@ubeclu.unibe.ch [Rudolf von Steiger]
Subj: Erf, Erfc
Date: 22 Feb 1993

Erf and erfc can be written as

   << -> x << 1 0 .5 x UTPN 2 * - >> >>
   'ERF' STO

   << -> x << 0 .5 x UTPN 2 * >> >>
   'ERFC' STO
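
These rely on the HP48's UTPN taking the VARIANCE (not the standard
deviation) as its second argument, so erfc(x) = 2*UTPN(0, .5, x). The
identity can be checked numerically in Python; utpn below is my
stand-in for the built-in, written from that variance convention.

```python
import math

def utpn(m, v, x):
    """Upper-tail probability of a normal with mean m and VARIANCE v."""
    return 0.5 * math.erfc((x - m) / math.sqrt(2 * v))

x = 0.7
print(abs(math.erfc(x) - 2 * utpn(0, 0.5, x)))        # ~0
print(abs(math.erf(x) - (1 - 2 * utpn(0, 0.5, x))))   # ~0
```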

        ruedi

Dr. Rudolf von Steiger, Physikalisches Institut,
University of Bern,
Sidlerstrasse 5, 3012 Bern (Switzerland) 
Phone: ++41 (0)31 65 44 19; Fax: ++41 (0)31 65 44 05
RFC: vsteiger@phim.unibe.ch  
X.400: S=vsteiger;OU=phim;O=unibe;P=switch;a=arcom;c=ch
HEPNET/SPAN: 20579::49203::vsteiger
DECnet (Switzerland): 49203::vsteiger
*********************************************************
@@GEOMETRY   SG

Subj: Geometry programs
Author: Joe Muccillo
Date: 23 Sept 1993



Here is a compilation of geometry programs which I have
placed in a library labelled ( 801: GEOMETRY ). I have
found a great number of uses for SIMPS, AG and IBOX in
surveying and engineering applications and would be
interested to know whether anyone else can find some
other interesting uses for them. Please note that none
of these programs are guaranteed in any way against
defect so use them at your own risk.


TRIA - TRIANGULAR GEOMETRY ANALYSIS

SIMPS - AREA CALCULATION USING SIMPSON'S RULE

AG - AREA CALCULATION USING TRAPEZOID RULE

IBOX - GENERAL AREA PROPERTY ANALYSIS

ISEG - AREA PROPERTIES FOR CIRCULAR SEGMENT

VSEG - VOLUME ENCLOSED BY CIRCULAR SEGMENT AND
       SLOPING PLANE

VCON - VOLUME OF A CONE

VSPH - VOLUME OF A SPHERE

SCON - SURFACE AREA OF A CONE

SSPH - SURFACE AREA OF A SPHERE                              

CINT - CIRCLE INSCRIBED IN TRIANGLE

INVER - SUBROUTINE USED WITH IBOX

EQ1,EQ2,EQ3 - SUBROUTINES USED WITH TRIA




TRIA - TRIANGULAR GEOMETRY ANALYSIS

INPUT

"TRIA" is a program used to evaluate side lengths, angles
and the enclosed area of a triangle given certain
information. The information required in order to
successfully run the program can be any of the following:

- 3 side lengths, zero angles.
- 2 side lengths, 1 angle.
- 1 side length, 2 angles.

When run the program prompts: 

             "ENTER A B C   
             (0 IF UNDEFINED)"

Where A:       Side length A
      B:       Side length B
      C:       Side length C
      [alpha]: Angle at intersection of sides B and C
      [beta]:  Angle at intersection of sides A and C 
      [delta]: Angle at intersection of sides A and B

Enter 0 for the items you wish the program to evaluate or
which are undefined ensuring that the input requirements
listed above are satisfied. The program will evaluate
all undefined items in addition to the area enclosed by
the triangle.

Note:  Avoid entering redundant information at the prompt
as this may lead to unsuccessful or incorrect program
execution. The above list shows the minimum information
required to uniquely define the geometry of the triangle.
Any additional information is redundant and should be
omitted.


OUTPUT

A single output screen is obtained listing all quantities
A, B, C, [alpha], [beta], [delta] and the AREA.
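
The three-sides case can be sketched in Python (law of cosines for the
angles, Heron's formula for the area). This is my own sketch of one
input combination, not a translation of TRIA; the other cases would
use the sine rule similarly.

```python
import math

def tria_sss(a, b, c):
    """Given three side lengths, return the three angles (degrees) and area."""
    alpha = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    beta  = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    delta = 180.0 - alpha - beta
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return alpha, beta, delta, area

alpha, beta, delta, area = tria_sss(3, 4, 5)
print(delta, area)   # ~90 degrees and area 6.0 for the 3-4-5 triangle
```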


USING "TRIA" WITH QUADRILATERALS

It is possible to indirectly apply "TRIA" to obtain
solutions for the area and internal angles of
quadrilaterals. The minimum required information in
order to uniquely define a quadrilateral is the lengths
of all sides in addition to 1 internal angle. The
procedure for evaluating the area bounded by the
quadrilateral and the remaining internal angles is
described below.

1. Divide the quadrilateral into 2 triangles with the
   single defined angle forming the apex of one of the
   triangles and the diagonal dividing the 2 triangles.

2. Using the known side lengths and defined angle
   evaluate the area of triangle No. 1 and the length of
   the diagonal using the "TRIA" program. This step also
   evaluates the part of the internal angles of the
   quadrilateral associated with triangle No. 1.

3. With the diagonal length having been evaluated, and
   using the remaining side lengths, evaluate the area
   and associated internal angles of triangle No. 2 using
   the "TRIA" program.
 
4. Add the two areas together to obtain the total area of
   the quadrilateral and add the internal angles of each
   triangle, where appropriate, to evaluate the internal
   angles of the quadrilateral.

   
SIMPS - AREA CALCULATION USING SIMPSON'S RULE

The program "SIMPS" evaluates the area of a set of N data
points using Simpson's Rule. In addition to the area the
program also calculates the horizontal centroid of the
data. The data is entered into a single column matrix
containing the vertical ordinates of the equally spaced
data points. There must be an ODD number of EQUALLY
SPACED points for Simpson's Rule to apply. This matrix
is placed in level 2 of the stack and the equal
horizontal spacing in level 1.

The output consists of two numbers:

     - The Area in stack level 2
     - The X centroid of area measured from the first
       data point in the input matrix assuming this point
       has an X value of zero.


Using Simpson's Rule the area of a set of N data points
is given by the following formula where N is odd and the
horizontal spacing, h, between points is equal.


     Area = (y1+4*y2+2*y3+2*y4+2*y5+....+4*yN-1+yN)*h/3
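
The formula above, plus the centroid calculation, can be sketched in
Python (the function name is mine; x is measured from the first
ordinate, as SIMPS assumes):

```python
def simps(y, h):
    """Area and x centroid of ordinates y at equal spacing h (Simpson's Rule)."""
    n = len(y)
    assert n % 2 == 1, "Simpson's Rule needs an ODD number of points"
    w = [1] + [4 if i % 2 else 2 for i in range(1, n - 1)] + [1]
    area = sum(wi * yi for wi, yi in zip(w, y)) * h / 3
    # Apply the same weights to x*y to get the first moment about x = 0.
    moment = sum(wi * (i * h) * yi
                 for i, (wi, yi) in enumerate(zip(w, y))) * h / 3
    return area, moment / area

area, xbar = simps([0, 0.25, 1, 2.25, 4], 0.5)   # y = x^2 on [0, 2]
print(area, xbar)   # ~8/3 and 1.5 (Simpson is exact through cubics)
```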


AG - AREA CALCULATION USING TRAPEZOID RULE

The program "AG" evaluates the area below a curve defined
by a set of X Y data points assuming the individual data
points are joined by a straight line. This is a
variation on the trapezoid rule which permits the X
spacing between data points to vary. The X Y coordinates
of the points are entered into a two column array with
the X values in the first column and Y values in the
second. The X values must be in ascending order. In
addition to the area the program also calculates the
horizontal centroid of the data and the overall range of
the data points specified. The matrix of coordinates is
entered into level 1 of the stack.

The output consists of three numbers:

     - The Area in stack level 3
     - The X centroid of area in stack level 2
     - The range of data in the input matrix
           = Xn - X1, in stack level 1
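
A Python sketch of the same calculation follows. This is my
reconstruction of the trapezoid method with variable X spacing (AG's
exact internals are not listed); the strip moment formula is the exact
first moment of a trapezoid with a linearly varying top.

```python
def ag(points):
    """points: [(x, y), ...] with x ascending. Returns area, x centroid, range."""
    area = moment = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx = x2 - x1
        area += dx * (y1 + y2) / 2                          # trapezoid strip
        # exact x-moment of the strip about the origin:
        moment += dx * (x1 * (2*y1 + y2) + x2 * (y1 + 2*y2)) / 6
    rng = points[-1][0] - points[0][0]
    return area, moment / area, rng

area, xbar, rng = ag([(0, 0), (1, 1), (3, 3)])   # y = x on [0, 3]
print(area, xbar, rng)   # 4.5, centroid 2.0, range 3
```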




IBOX - GENERAL AREA PROPERTY ANALYSIS

This program will calculate the area "A", second moments
of area "Ixx" and "Iyy", the product of area "Ixy" and
the centroid "x" and "y" for an arbitrary shape. Areas
containing voids can also be handled. The analysis
method is based on contour integration principles.
Although this program can be used in a variety of
applications I originally developed it as a cross section
property analysis program for use in structural
engineering. For example, it can be used to assess the
section properties of a multi celled box girder.

The analysis method involves firstly establishing an
arbitrary and convenient coordinate axis system from
which coordinates of points around the area are
referenced. Labelling of coordinates proceeds using the
following criteria:

     - Coordinates are labelled in a clockwise fashion
       around the perimeter of the area and anticlockwise
       within a void.

     - A coordinate should be defined at the end of each
       straight line segment.

     - Where curved geometries are being analysed the
       curve is divided into a series of straight line
       segments with nodes at each end. The more
       straight line segments are defined, the more
       accurate the results. Note that a geometry
       consisting of straight line segments only will
       produce exact results.

     - The final coordinate should be equal to the first
       coordinate so as to produce a closed section.
       Note that this rule can be waived when the y value
       of the last coordinate is the same as that of the
       first.

Data is entered into a two column array with x and y
coordinates of successive points stored in the first and
second columns respectively. This array is then placed
in level 1 of the stack and the program executed.

When executed the program will initially prompt for
whether you wish to evaluate the product of area Ixy.
This is a most useful quantity, particularly if you are
trying to establish whether the axis about which the
properties are being evaluated is a principal axis for
the section. A principal axis is one for which Ixy = 0.
An important restriction is placed on the use of "IBOX"
to evaluate Ixy: successive coordinates must not have
the same x value. If they do, and you reply yes to the
evaluation of Ixy, a divide-by-zero error will occur
during program execution, halting the program with an
'ERROR' message. To avoid this, check the input data for
cases where successive coordinates have the same x
value. If one arises, alter one of the x values by a
very small amount, small enough not to change the
geometry appreciably but enough to ensure that a
divide-by-zero error does not occur. Under normal
circumstances Ixy will not be required, and entering any
number other than zero at the prompt will skip this
calculation.

Upon successful program execution a single screen of
labelled output is obtained containing the quantities
described above.
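
The contour-integration idea can be sketched in Python with the
standard Green's-theorem formulas. Note the assumptions: this uses the
usual anticlockwise perimeter convention (the opposite of IBOX's
clockwise labelling), it computes Ixx and Iyy about the reference
axes rather than the centroid, and it is my sketch, not IBOX's code.
This closed-form route has no trouble with vertical edges.

```python
def section_props(pts):
    """pts: [(x, y), ...] anticlockwise, closed (last point == first).
    Returns area, centroid x, centroid y, Ixx, Iyy, Ixy (about the axes)."""
    A = cx = cy = ixx = iyy = ixy = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        cr = x1 * y2 - x2 * y1                      # edge cross product
        A   += cr / 2
        cx  += (x1 + x2) * cr / 6
        cy  += (y1 + y2) * cr / 6
        ixx += (y1*y1 + y1*y2 + y2*y2) * cr / 12    # about the x axis
        iyy += (x1*x1 + x1*x2 + x2*x2) * cr / 12    # about the y axis
        ixy += (x1*y2 + 2*x1*y1 + 2*x2*y2 + x2*y1) * cr / 24
    return A, cx / A, cy / A, ixx, iyy, ixy

sq = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]       # unit square
A, cx, cy, ixx, iyy, ixy = section_props(sq)
print(A, cx, cy, ixx, iyy, ixy)   # 1, (0.5, 0.5), ~1/3, ~1/3, 0.25
```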


ISEG

The ISEG program evaluates the area, centroid and second
moment of area of a circular segment given the radius of
the circle and depth of segment. The second moment of
area is calculated about the centroidal axis of the
segment.

The program takes 2 numbers from the stack:

     Level 2 - radius of circle.

     Level 1 - depth of segment.

and returns the following:

     Level 3 - Area of segment.

     Level 2 - Centroid of segment measured from top of        
               circle.

     Level 1 - Second moment of area of segment.
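
The closed-form segment properties ISEG presumably uses can be
sketched in Python. Only the area and centroid are shown here (the
second-moment expression is omitted for brevity); the function name
and argument order are mine.

```python
import math

def iseg_area_centroid(R, d):
    """Circular segment of radius R and depth d measured from the top.
    Returns the area and the centroid distance from the top of the circle."""
    theta = 2 * math.acos((R - d) / R)              # included angle
    area = R * R * (theta - math.sin(theta)) / 2
    # centroid distance from the circle's CENTRE, then convert to from-top:
    yc = 4 * R * math.sin(theta / 2) ** 3 / (3 * (theta - math.sin(theta)))
    return area, R - yc

area, cent = iseg_area_centroid(1.0, 1.0)           # half-disc check
print(area, cent)   # ~pi/2, centroid ~1 - 4/(3*pi) from the top
```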




VSEG

VSEG is a program which calculates the volume under an
inclined plane which intersects a horizontal circular
area. This is equivalent to the volume of a circular
segment with varying height around the circumference of
the circle and a height of zero along the straight line
boundary of the segment. VSEG also calculates the
centroid of the volume measured from the top of the
circular segment.

INPUT:    Level 3 - radius of circle.

          Level 2 - depth of segment.

          Level 1 - Max. height of plane above segment.

OUTPUT:   Level 2 - Volume.

          Level 1 - Centroid.


VCON - VOLUME OF A CONE

Calculates the volume of a right cone. The base radius
and perpendicular height are entered in levels 2 and 1 of
the stack respectively.


VSPH - VOLUME OF A SPHERE

Calculates the volume of a sphere given the radius in
level 1 of the stack.


SCON - SURFACE AREA OF A CONE

Calculates the total surface area of a right cone
including the base area. The input is as for "VCON".


SSPH - SURFACE AREA OF A SPHERE

Calculates the total surface area of a sphere given the
radius in level 1 of the stack.


CINT - CIRCLE INSCRIBED IN TRIANGLE

Calculates the radius of a circle inscribed within a
triangle whose side lengths are specified in levels 1, 2
and 3 of the stack. The program returns the radius to
level 1. I have taken this program directly from "HP48
INSIGHTS Part 1: Principles and Programming" by William
C. Wickes.
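
The underlying formula is r = Area/s, which with Heron's formula
reduces to the one-line expression below (a Python sketch; the
function name is mine):

```python
import math

def cint(a, b, c):
    """Radius of the circle inscribed in a triangle with sides a, b, c."""
    s = (a + b + c) / 2                 # semi-perimeter
    return math.sqrt((s - a) * (s - b) * (s - c) / s)

print(cint(3, 4, 5))   # 1.0
```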



Joe Muccillo
66 Prospect St
Pascoe Vale, 3044
Melbourne
AUSTRALIA

@@G/GX Matrices
This is a "thread" from comp.sys.hp48 regarding the G/GX.

 +------------------------------------------------------+
 | G SERIES HAS BETTER INTERNAL MATRIX ACCURACY THAN SX |
 +------------------------------------------------------+
                    by Joseph K. Horn

WOW, check this out. HP improved the ACCURACY of the
matrix functions in the G series!  Why don't they
advertise the fact?  This is great!

Try out these examples.

The inverse of

[[ 58 59 ]
 [ 68 69 ]]

(obtained by pressing 1/X) is supposed to be exactly

[[ -6.9  5.9 ]
 [  6.8 -5.8 ]]

and that's what the GX gets. And if you then invert
that, you get the original back, as you should.

But the SX gets:

[[ -6.89999999197  5.89999999314 ]
 [  6.79999999211 -5.79999999326 ]]

Lots of wrong digits at the end of each element.
Re-invert, and you get:

[[ 58.0000000022 59.0000000020 ]
 [ 68.0000000027 69.0000000023 ]]

instead of the original.

Even if you key in the correct inverse listed above, and
press 1/X on that, you *still* don't get the original
back. The SX just doesn't have the accuracy of the GX.

Other examples to try:

[[  79  94 ]       [[  25 -18.8 ]
 [ 105 125 ]]       [ -21  15.8 ]]

[[  3 101 ]        [[ -36.7 10.1 ]
 [ 11 367 ]]        [   1.1  -.3 ]]

GX gets these right. SX doesn't.

Here's a fascinating and unexpectedly elegant proof of
the GX's superiority, brought to my attention by Rodger
Rosenbaum. Hilbert matrices are horribly ill-conditioned
and are a good test of a machine's accuracy. The 4x4
Hilbert matrix has this form:

[[ 1/1 1/2 1/3 1/4 ]
 [ 1/2 1/3 1/4 1/5 ]
 [ 1/3 1/4 1/5 1/6 ]
 [ 1/4 1/5 1/6 1/7 ]]

It can be obtained by running this 'HILBERT' program:

HILBERT, by Joe Horn; n -> n-by-n Hilbert matrix

<< -> n
   << 1 n
      FOR j j DUP n + 1 -
        FOR k k INV
        NEXT
      NEXT { n n } ->ARRY
   >>
>>

It has this as its exact inverse:

[[   16  -120   240  -140 ]
 [ -120  1200 -2700  1680 ]
 [  240 -2700  6480 -4200 ]
 [ -140  1680 -4200  2800 ]]

which I just verified using the Derive program. But if
you invert the 4x4 Hilbert matrix on your HP48, you'll
see elements that differ from the expected inverse listed
above. Until yesterday I'd have attributed this just to
round-off errors accumulating during the internal
calculation of the inverse.
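
Derive is not needed to verify the exact inverse; any exact rational
arithmetic will do. Here is a stdlib-only Python sketch (Gauss-Jordan
elimination over Fractions; the function name is mine):

```python
from fractions import Fraction

def exact_inverse(M):
    """Exact inverse of a square matrix via Gauss-Jordan over Fractions."""
    n = len(M)
    # Augment [M | I] with everything converted to exact rationals.
    A = [[Fraction(v) for v in row] +
         [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]            # normalize pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

hilb4 = [[Fraction(1, i + j + 1) for j in range(4)] for i in range(4)]
print(exact_inverse(hilb4)[0])   # first row: 16, -120, 240, -140
```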

*** WRONG! ***   That's NOT primarily what's happening.
Here's proof.

Here's what the SX gets:

16.0000000111 -120.000000119 240.000000282 -140.000000182
-120.000000124 1200.00000133 -2700.00000317 1680.00000205
240.000000302 -2700.00000326 6480.00000779 -4200.00000504
-140.000000199 1680.00000216 -4200.00000516 2800.00000334

Lots of wrong digits. But do the same process on the
"more accurate" GX and you get even worse-looking
results:

16.0000000325 -120.000000365 240.000000882 -140.000000575
-120.000000365 1200.00000411 -2700.00000995 1680.00000649
240.000000882 -2700.00000995 6480.00002408 -4200.00001572
-140.000000576 1680.00000650 -4200.00001572 2800.00001027

If the GX is more accurate, why are its results all worse
than the SX's results?  The answer is that the HP48 NEVER
CONTAINED the Hilbert matrix. For example, the third
element is supposed to be 1/3, right? But the HP48 only
had the first 12 digits of 1/3 (0.333333333333) in that
position, not 1/3. Thus we need to test a new
hypothesis: could the GX result actually be the CORRECT
answer, given that the values in the matrix were rounded
off to 12 places BEFORE the inverse was calculated?

To test this, I typed the following into Derive, setting
the calculation accuracy to 20 places and the notation to
12 digits:

1.00000000000 .500000000000 .333333333333 .250000000000
.500000000000 .333333333333 .250000000000 .200000000000
.333333333333 .250000000000 .200000000000 .166666666667
.250000000000 .200000000000 .166666666667 .142857142857
.200000000000 .166666666667 .142857142857 .125000000000

That is what you REALLY get when you run 'HILBERT' on the
HP48, not the idealized matrix of reciprocals listed
above.

Inverting this in Derive yields what the HP48 SHOULD get:

16.0000000325 -120.000000366 240.000000884 -140.000000576
-120.000000366 1200.00000412 -2700.00000997 1680.00000651
240.000000884 -2700.00000997 6480.00002413 -4200.00001575
-140.000000576 1680.00000651 -4200.00001575 2800.00001029

Compare this to what the GX gets!  Almost the same!  The
very worst error is only 5 off in the 12th place. So the
"errors" that we mistakenly attributed to faulty
calculations were in fact due to faulty inputs, and our
hypothesis is validated. The SX, however, has errors
propagating all the way up to the 9th decimal place. The
GX therefore is superior to the SX in matrix math
accuracy.


jurjen@q2c.nl [Jurjen N.E. Bos] writes:

> This is indeed remarkable. It looks like they are
> using a special super-long type (like 24 digits instead
> of 15) for the internal multiplications. Did anyone
> check the speed?

SX, normal INV of 4x4 Hilbert: 0.895_s
GX, normal INV of 4x4 Hilbert: 0.645_s

SX, Jurjen's RSD improvement : 2.427_s
GX, Jurjen's RSD improvement : 1.685_s

Both are roughly 30% faster for the GX.

> By the way: [using RSD] will also [work] on the G(X),
> and probably improve the accuracy to even higher
> standards. Can anyone with a G(X) handy check this out?

Using RSD on the GX yields identical results as on the SX
for a 4x4 Hilbert, but begins to differ already for a
5x5. Using Toby's method to calculate the absolute error
gives these "number of wrong digits" values for the
Hilbert matrices of various sizes:

TABLE OF "NUMBER OF WRONG DIGITS" USING TOBY'S METHOD

Hilbert          --GX--              --GX-- INV plus
 Order         INV alone             one RSD iteration

   4                  28                         5
   5               1,192                       978
   6               8,769                    58,705  <--??
   7           1,925,587                 1,885,921
   8         120,327,605                59,548,237
   9       2,968,381,031               939,982,230
  10      37,012,693,951            17,062,216,102

Hilbert          --SX--              --SX-- INV plus
 Order         INV alone             one RSD iteration

   4               9,628                         5
   5             417,984                       691  <--!!
   6           7,164,844                    39,383  <--!!
   7          33,256,221                 1,842,570  <--!!
   8         464,357,873                53,018,430  <--!!
   9      43,094,094,759               997,947,848
  10   2,833,599,816,259           208,556,942,563

Calculations were done by Derive with precision set to 64
internal digits, truncated to 15 digits, then rounded to
12 digits.

That the GX beats the SX using a naked INV is no
surprise. But there are two surprising things in the RSD
columns. First, using RSD on a 6x6 Hilbert inversion
makes it WORSE on the GX than just using INV alone (the
number of wrong digits rises from 8769 to 58705!).
Second, using RSD on the SX seems to give better final
results than using RSD on the GX more often than not!
What gives?  HP improved INV but not RSD, such that RSD's
inaccuracy can prove counterproductive???  Is using RSD
on a GX INV akin to attaching an antenna to improve the
picture quality on a cable TV?


Reply from paulm@hpcvra.cv.hp.com [Paul McClellan], the
HP team member responsible for improving the matrix code:

The S/SX computed matrix inverses and system solutions
in-place, computing inner products to 15 digits of
precision but rounding computed quantities to 12 digits
of precision before storing them in the array. RSD
generally improves the SX results because the residuals
are computed to 15 digits of precision, then rounded to
12 digits. RSD can thereby provide useful information
for iterative refinement.

The G/GX computes matrix inverses and system solutions to
full 15 digits of accuracy, rounding only the final
results of the computation. These operations use
15-digit versions of their array arguments internally.
RSD was not changed, and in general it no longer provides
useful information for iterative refinement, as you have
determined empirically.
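
The mechanism Paul describes can be illustrated with a toy Python
sketch of iterative refinement: solve in working precision, compute
the residual in higher precision (exact Fractions stand in for the
15-digit internals here), then solve for a correction. The matrix is
the one from the start of this thread; all names are mine.

```python
from fractions import Fraction

def solve2(A, b):
    """2x2 linear solve by Cramer's rule, in ordinary floats."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[58.0, 59.0], [68.0, 69.0]]
b = [1.0, 0.0]                      # exact solution: [-6.9, 6.8]
x = solve2(A, b)
# Residual r = b - A*x, evaluated exactly (the "extra digits" step):
r = [float(Fraction(b[i]) - sum(Fraction(A[i][j]) * Fraction(x[j])
                                for j in range(2))) for i in range(2)]
d = solve2(A, r)                    # one refinement pass, like RSD
x = [xi + di for xi, di in zip(x, d)]
print(x)                            # very close to [-6.9, 6.8]
```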

My experience is that using RSD on the SX will not
generally give better results than "naked" GX results.
As the ill-conditioning of the problem increases,
eventually the three extra digits one recovers via the
SX's RSD is overwhelmed by errors in the 12-digit
computations. Your data illustrates this with Hilbert
matrices of order 9 and 10.

It would have been helpful had we computed the G/GX RSD
to double precision before rounding to 12 digits. This
would have made it useful for iterative refinement. We
ran out of resources and did not do so. However, this
can be done by implementing double precision arithmetic
in user RPL. Two references I have on the subject are:

Dekker, T.J. "A floating-point technique for extending
the available precision", Numer. Math. 18 (1971) 224-242.

Linnainmaa, S. "Software for doubled-precision
floating-point computations", ACM Trans. Math. Soft. Vol
7. No 3 (Sept 1981) 272-283.

[Note: See Digi24 on this disk for an HP48 implementation
 of the algorithms in the first reference above.  -jkh-]
@@Digi24     SG
By: rodger@hardy.u.washington.edu [Rodger Rosenbaum]
Date: 09 Nov 1993

            +===============================+
            | Double Precision on the HP-48 |
            +===============================+

    Here are some routines based on a paper that appeared
in Numerische Mathematik in 1971, page 224. The paper
was written by T.J. Dekker and is entitled "A
Floating-Point Technique for Extending the Available
Precision." Dekker's method requires that the single
precision arithmetic used in the machine under
consideration meet rather severe requirements, such as
impeccable rounding. Fortunately, recent HP machines do;
most other calculators do not, so Dekker's method will
probably fail on them. In this method, a double precision
number is represented as two single precision numbers. I
have chosen not to do the obvious and let the two halves
of a complex number on the HP-48 be one double precision
number. Instead, one double precision number is
represented by two single precision numbers on separate
levels of the stack. I have done this because (1) the
HP-48 allows a scalar to multiply a vector in one
operation, which greatly simplifies and speeds up double
precision Gaussian elimination, and (2) this way double
precision arithmetic can be done on
complex numbers. I do not know that the results are
accurate when using complex numbers in the way that
Dekker proves they are when using reals, but my empirical
testing indicates that they are. I would be glad to hear
of any counterexamples.

                       +----------+
                       | Examples |
                       +----------+

A double precision number is represented as two single
precision numbers on the stack. For example, double
precision 2 is represented as:

    2:     2
    1:     0
    
now put double precision 3 on the stack:

    4:     2
    3:     0
    2:     3
    1:     0
    
now multiply them by executing MUL2:

    2:     6
    1:     0
    
A double precision complex number, (1,1) is:

    2:    (1,1)
    1:    (0,0)
    
Take the square root by executing SQR2:

    2:  (1.09868411347,.455089860562)
    1:  (-2.19003396021E-12,2.27341304364E-13)
    
now execute DUP2 MUL2 and see the original (1,1).

Double precision numbers can be typed in conveniently by
just adding a comma (or space) and a zero to insert the
least significant part; for example, 2*3 in double
precision could be done by typing 2,0 ENTER 3,0 ENTER
MUL2.

Notice that after a sequence of operations using double
precision, the double precision result on the stack can
be converted to single precision by just pressing +.
     
     +----------------------------------------------+
     | Description of the Double Precision Routines |
     +----------------------------------------------+

SPLT:  This routine splits a single precision number into
two parts as required by Dekker's algorithm.

MUL:  This routine multiplies two single precision
numbers and returns a double precision product. It is
used as an auxiliary routine by MUL2, DIV2, and SQR2.
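
Dekker's split and two-product can be sketched in Python's binary64
arithmetic. This is not a translation of SPLT/MUL: the HP-48 works in
12-digit decimal, where Dekker's split constant would be different
(10^6 + 1 rather than the 2^27 + 1 used for IEEE doubles), but the
idea is identical.

```python
SPLIT = 2.0 ** 27 + 1.0          # split constant for a 53-bit significand

def splt(a):
    """Split a into hi + lo, each holding half of a's significand."""
    c = SPLIT * a
    hi = c - (c - a)
    return hi, a - hi

def mul(a, b):
    """Exact product: returns (p, err) with p + err == a*b exactly."""
    hi_a, lo_a = splt(a)
    hi_b, lo_b = splt(b)
    p = a * b                    # rounded product
    err = ((hi_a * hi_b - p) + hi_a * lo_b + lo_a * hi_b) + lo_a * lo_b
    return p, err

u = 2.0 ** -52                   # one ulp of 1.0
p, err = mul(1.0 + u, 1.0 + u)   # (1+u)^2 = 1 + 2u + u^2
print(p == 1.0 + 2.0 * u, err == u * u)   # True True
```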

ADD2, SUB2, MUL2, DIV2:  These routines expect two double
precision arguments on the stack (occupying four levels)
and compute the double precision result suggested by
their names. One of the arguments, but not both, may be
a vector or matrix when MUL2 is executed; the dividend
may be a vector or matrix when DIV2 is used. None of the
arguments may be a vector or matrix when using ADD2 or
SUB2. Complex arguments may be used with all four
routines. Thus, to multiply [1 2 3] x 5 the stack must
be set up like this:

      4:[1 2 3]
      3:[0 0 0]
      2:  5
      1:  0
      
   then execute MUL2 and see the double precision result:
   
      2:[5 10 15]
      1:[0 0 0]
      
SQR2:  This routine computes a double precision square
root. Type in:
  
      2:5
      1:0
      
   Execute SQR2 and see a peculiarity of Dekker's method.
   The result on the stack is:
   
      2:   2.2360679775
      1:  -2.10303590826E-13
      
   notice that the least significant part of the result
   is negative. This is an artifact of Dekker's method,
   which brings me to the next routine.
   
   
SHO2:  This routine converts one double precision real
scalar (note, scalar only) on the stack to two strings on
the stack. The most significant part will be shown in
scientific format; the least significant part will be
shown as twelve digits. If the least significant twelve
digits are inserted just in front of the letter E in the
most significant part, this will be the final 24 digit
result. I have left the two parts separate on the stack
so that they can both be seen without recourse to the
EDIT or VISIT function. Notice that the SHO2 routine
takes care of the possibility that the least significant
part is negative in Dekker's format, as in the square
root of 5 example above. Another thing taken care of by
SHO2 is demonstrated by taking the double precision
square root of 163. Type in:

     2:  163
     1:    0
     
   and then execute SQR2; see on the stack:
   
     2:    12.7671453348
     1: 3.70466171095E-12
     
   now execute SHO2 and see:
   
     2:  "1.27671453348E1"
     1:     "037046617109"
     
This represents the number:  12.7671453348037046617109
Notice that after executing SHO2 it can be seen that
there is a leading zero in the least significant part
that was not obvious before. Also notice that SHO2 does
not properly round the last digit of the least
significant part, but merely truncates it.
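On a PC, the splicing that SHO2 performs can be imitated with Python's decimal module (a sketch of the idea only; the two digit strings below come from the square root of 5 example above). Adding the parts at high decimal precision also absorbs a negative least significant part automatically:

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
# most and least significant parts as produced by SQR2 for sqrt(5)
hi = Decimal("2.2360679775")
lo = Decimal("-2.10303590826E-13")
combined = hi + lo   # exact at this precision; handles the negative low part
assert str(combined) == "2.236067977499789696409174"
```

The true value begins 2.23606797749978969640917366..., so the last digits of the combined result reflect the truncation (not rounding) of the least significant part, as noted above.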

SHO2 will not work with a double precision vector on the
stack; an element of the vector (both parts) must be
extracted with GET and then converted with SHO2. An
example of this is the routine LOOK which is designed to
display the results of a double precision Gaussian
elimination; see below. Nor will SHO2 work with complex
numbers directly; the respective real or imaginary most
significant and least significant parts must be extracted
and then SHO2 can be used.

QAD: This routine uses double precision arithmetic to
solve the general quadratic equation, Ax^2+Bx+C. Put the
values A,B,C on the stack and execute QAD. The roots are
given only to single precision, but the discriminant is
computed with double precision; thus the example given in
the last pages of the HP-15 Advanced Functions handbook
is solved correctly.
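Why a double precision discriminant matters can be seen in a small Python illustration (a constructed example, not the one from the HP-15 handbook): for a quadratic with two nearly equal roots, the single precision discriminant cancels to zero while the exact value is tiny but nonzero.

```python
from fractions import Fraction

# quadratic with roots 1 and 1 + 2**-40 (clustered):
a, c = 1.0, 1.0 + 2.0**-40
b = 2.0 + 2.0**-40

naive = b * b - 4.0 * a * c                       # cancellation wipes it out
exact = Fraction(b)**2 - 4 * Fraction(a) * Fraction(c)

assert naive == 0.0                               # single precision: "double root"
assert exact == Fraction(1, 2**80)                # true discriminant (r1 - r2)**2
```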

      +--------------------------------------------+
      | Gaussian Elimination and Back Substitution |
      +--------------------------------------------+

It is possible to do a double precision solution of a
linear system with these routines. Assume that on the
stack is a right-hand side vector (or matrix) of
constants and a matrix of coefficients, e.g.:

       2:  [[ 1 ]
            [ 7 ]
            [ 9 ]]
       1:  [[ 1 2 3 ]
            [ 7 5 2 ]
            [ 3 5 4 ]]
            
First, these two must be converted to an augmented matrix
by executing AUGM. See:

        6: [ 1 2 3 1 ]
        5: [ 0 0 0 0 ]
        4: [ 7 5 2 7 ]
        3: [ 0 0 0 0 ]
        2: [ 3 5 4 9 ]
        1: [ 0 0 0 0 ]
         
This represents a double precision augmented matrix. Now
execute ELIM and see the results of double precision
Gaussian elimination:

        6: [ 1 .714285714286 .285714285714 1 ]
        5: [ 0 -2.85714285714E-13 2.85714285714E-13 0 ]
        4: [ 0 1 1.1 2.1 ]
        3: [ 0 0 3.5E-24 3.5E-24 ]
        2: [ 0 0 1.3 -2.7 ]
        1: [ 0 0 1.E-23 1.E-23 ]
        
In order to see all 24 digits of the (1,2) element (for
example) of the solution, which has a negative least
significant part, execute: 6 PICK 2 GET 6 PICK 2 GET SHO2
and see:

         3: [ 0 0 1.E-23 1.E-23...
         2:   "7.14285714285E-1"
         1:       "714285714286"
         
Now, to complete the solution, drop the two strings and
execute BACK which will do back substitution. The stack
now contains:

         6: [ 1 0 0 -1.53846153846 ]
         5: [ 0 0 0 -1.53846153845E-12 ]
         4: [ 0 1 0 4.38461538462 ]
         3: [ 0 0 0 -4.6153846154E-12 ]
         2: [ 0 0 1 -2.07692307692 ]
         1: [ 0 0 0 -3.0769230769E-12 ]
         
Now, to see the solution elements as strings, use LOOK.
To see the first element of the solution, type 1 LOOK; to
see the second, type 2 LOOK; etc. Don't forget to drop
the two strings of the previous result before executing
LOOK again.
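The ELIM/UNIT/RED/BACK pipeline can be modeled in Python using exact rationals in place of double precision pairs (an illustrative sketch, not a port; the names are mine). It reproduces the solution shown above: -20/13 = -1.53846153846..., 57/13 = 4.38461538462..., -27/13 = -2.07692307692...

```python
from fractions import Fraction

def solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for p in range(n):
        q = max(range(p, n), key=lambda r: abs(M[r][p]))   # PIV: partial pivoting
        M[p], M[q] = M[q], M[p]                            # EXG: exchange rows
        M[p] = [x / M[p][p] for x in M[p]]                 # UNIT: divide by pivot
        for r in range(p + 1, n):                          # RED: reduce rows below
            M[r] = [x - M[r][p] * y for x, y in zip(M[r], M[p])]
    x = [Fraction(0)] * n
    for p in reversed(range(n)):                           # BACK: back substitution
        x[p] = M[p][n] - sum(M[p][c] * x[c] for c in range(p + 1, n))
    return x

A = [[1, 2, 3], [7, 5, 2], [3, 5, 4]]
b = [1, 7, 9]
assert solve(A, b) == [Fraction(-20, 13), Fraction(57, 13), Fraction(-27, 13)]
```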

The routine PeVAL does the same thing as the PEVAL
function in the HP-48GX, but the results are computed and
accumulated in double precision. The final result is
returned as a standard single precision REAL. If a
double precision final result is wanted, delete the final
+ in the routine PeVAL.
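PeVAL's coefficient convention (highest power first) with Horner's rule can be sketched in Python. Note this sketch accumulates in ordinary floats only; the HP routine's point is that it accumulates in double precision.

```python
def peval(coeffs, x):
    # Horner's rule; coefficients listed highest power first
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

assert peval([1, 2, 3], 2.0) == 11.0   # 1*2^2 + 2*2 + 3
```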

                       +---------+
                       | SUMMARY |
                       +---------+

ELIM:  Does double precision Gaussian elimination.

BACK:  Does back substitution after elimination.

UNIT:  Divides a row by its pivot element.

RED:   Reduces the remaining rows of the augmented matrix
       by the pivot row.

PIV:   Does partial pivoting.

PIVX:  Another partial pivoting routine; does it as shown
       in most textbooks.

EXG:   Exchanges double precision rows.

MAKE:  An auxiliary routine used by AUGM. It creates a
          row vector of all zeroes to represent the least
          significant part of a double precision vector.

GAUS:  Does the whole job in double precision. GAUS
          expects a problem to be set up just the way the
          HP-48 expects it, with a column vector (or
          matrix) on level 2 of the stack and a square
          matrix on level 1, to be solved by pressing the
          divide key.

PeVAL: Evaluates a polynomial. Put the vector of
       coefficients on stack level 2 and the value of the
       independent variable on level 1.


SWP2, OVR2, ROT2, PCK:  Double precision utilities.
@@LAPLACE    SG
(Comp.sys.hp48)
Item: 1357 by meissner@triton.unm.edu [John Meissner]
Subj: Laplace_jm -- New Version.
Date: 04 Jul 1992

[Note: John makes several references to Wayne Scott's
 POLY software. Details about POLY can be found on
 Goodies Disks 1 and 7. -jkh-]

     These routines are an upgrade to my previous Laplace
transform utility. Many of the new programs have been
rewritten in SYSTEM-RPL. As always, system RPL can cause
some unpredictable things to happen -- including MEMORY
LOST errors.

     I have striven to make the routines safe, but I am
new to system-RPL programming and there are undoubtedly
some bugs still left in the routines. I am posting these
routines to comp.sys.hp48 so that these bugs can be found
and corrected over the next few weeks, after which I can
release some kind of a final product.

   VERY IMPORTANT! DO NOT USE THESE ROUTINES UNLESS YOU
ARE WILLING TO SUFFER A MEMORY LOST!  I have not had any
of these types of problems for a month or so, but they
can happen, so beware. I have tested these routines
somewhat extensively and have no trouble to report. If
problems of any kind do arise, I would appreciate people
dropping me a line and letting me fix them. After a few
weeks I will post a more finalized version to
comp.sources.hp48. In the interim, keep the bug reports
rolling in.

         Summary of changes in these routines:

         1) Rewritten in system-RPL; these routines run
         5 to 10 times faster.

         2) Removed usage of flags [0-5].

         3) Fixed bugs in & and RT-> that caused
         incorrect transforms.

         4) Installed a patch that allows 't' to exist
         elsewhere in the PATH.

         5) Removed the 'e^t' notation.

         6) Recognizes and correctly handles the 'SQ()'
         function.

         7) Many changes in how the routines work that
            speed up or otherwise enhance their
            performance.

     The original routines used Wayne Scott's polynomial
routines for most of the grunt work. I don't think any
of Wayne's routines are still in this package, but he
deserves credit for the notation and ultimately these
routines, since I never would have bothered without his
work.

     Also, Bill Wickes' port of a polynomial root finder
is included in these routines. Thank you Bill for the
efforts.

         The remainder of this article is an edited
version of my original post.

     The following programs are released to the public
domain provided that no money exchanges hands (unless of
course my hands are involved).

     A few disclaimers are in order.

     1) The routines in this directory use Wayne Scott's
polynomial root finder routines extensively. Beware --
if you already have these routines and use them, then you
need to read all of these documents to determine what
effects might surface. For the most part, these routines
are entirely different even though they do accomplish
some of the same tasks.

     2) If you are unfamiliar with the format of the
polynomials used by Wayne's routines, here is a brief
refresher:

the polynomial   :       x^4 - 3*x^3 - 2*x -1
is represented as:       { 1 -3 0 -2 -1 }

     This is to say the first element of the list is the
coefficient for the x^4 term and so on through the
equation. It is customary to use 's' for the Laplace
variable, so from here on, I will use 's' in my examples.
   
        Some of the routines require two polynomials on
the stack. For example, the ratio of the two polynomials
representing this expression:

                3*s^5 - 3*s^3 + 5*s^2 - 2
         ---------------------------------------
         s^5 - 7*s^4 + 15*s^3 - 5*s^2 -16*s + 12

would be represented on the stack as:
                   
                   2:  {3 0 -3 5 0 -2}
                   1:  {1 -7 15 -5 -16 12}

Another method of representing the same ratio is:

             2:  {3 0 -3 5 0 -2}
             1:  {3 2 2 1 -1}

     where level 2 contains the same list as the previous
example. However, level 1 contains the roots of the
denominator.
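Going from the roots form back to the coefficient form (what the IRT routine described later does) is just multiplying out the (s - r) factors. A Python sketch (names mine), reproducing the denominator above:

```python
def roots_to_poly(roots):
    # multiply out (s - r) factors; coefficients highest power first
    poly = [1]
    for r in roots:
        poly = poly + [0]                        # multiply by s
        for i in range(len(poly) - 1, 0, -1):    # subtract r * (old poly)
            poly[i] -= r * poly[i - 1]
    return poly

assert roots_to_poly([3, 2, 2, 1, -1]) == [1, -7, 15, -5, -16, 12]
```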

     Hopefully, these examples are clear enough for you
to get the picture.

     The following are descriptions and examples of each
of the functions in the LAPLACE directory:

t

     This variable returns an unevaluated copy of itself.
It allows the routines to operate when there is another
't' defined in the current path. It is also useful for
keying in formulae, since the independent variable for
these routines must be 't'.

->L

     This program takes an algebraic object and returns
its Laplace transform. The input must be a function of
't'; that is to say, the only independent variable in the
function must be lower-case t. The functions that it
recognizes are constants, exponentials, sines, cosines,
and polynomials. The program should allow any
combination of multiplication, addition, subtraction, and
integer exponents. Division by non-constants, however,
is not supported. For example:

1: 'COS(2*t)^2'
->L

after a pause will return :

2: { 1 0 8 }              ; numerator 
1: { (0,4) (0,0) (0,-4) } ; roots of the denominator 
 
  
     This represents the transform. With this as input,
the inverse transform function returns:

L->

1: '1/2 + 1/2*COS(4*t)'

     This doesn't look the same as the original function,
but the answer is correct: COS(t)^2 = 1/2 +
1/2*COS(2*t) is a trigonometric identity. Kinda nifty,
eh?

     Another interesting result of the transformation is
integration. Integration in the time domain is division
by s in the s domain. So, take the function:

1: 'EXP(-2*t)*COS(t)^2'

->L

Returns:

2: {1 4 6 }
1: { (-2,2) (-2,0) (-2,-2) }

     The list on level one is the roots of the
denominator. If you add a zero to the list ( edit the
list or just put a zero on the stack and press the + key
) you are effectively multiplying the denominator by s.
This is the same as dividing the fraction by s. The
result in the time domain, after the inverse transform is
done, is the integration of the original function between
the limits of 0 and 't.' For example:

2: { 1 4 6 }
1: { (-2,2) (-2,0) (-2,-2) 0 }
->L

Returns:

1: '3/8-1/8*EXP(-(2*t))*COS(2*t)+1/8*EXP(-(2*t))*SIN(2*
   t)-1/4*EXP(-(2*t))'

     This is the integral of the original function
evaluated between 0 and t. In many cases the general
solution can be obtained by replacing the constant (in
this case 3/8) with a general constant of integration.
Needless to say, this method of integration was a little
easier than looking in the integral tables. The
description of the inverse Laplace Transformer follows.
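The closed form above can be cross-checked numerically on a PC. The sketch below uses the decaying form -1/4*EXP(-(2*t)) for the last term (the integral of a decaying function must stay bounded) and compares it against Simpson's-rule quadrature of the original function:

```python
import math

def closed_form(t):
    # integral of EXP(-2*u)*COS(u)^2 from 0 to t
    e = math.exp(-2.0 * t)
    return 3.0/8.0 - e/8.0*math.cos(2.0*t) + e/8.0*math.sin(2.0*t) - e/4.0

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

numeric = simpson(lambda u: math.exp(-2.0*u) * math.cos(u)**2, 0.0, 1.0)
assert abs(numeric - closed_form(1.0)) < 1e-9
```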

L->
     This program takes the numerator, level 2, and the
denominator, level 1 (factored), and returns its inverse
Laplace Transform. The factors of the denominator need
not be grouped together as Wayne's routines require since
they are sorted before anything is done to them. I
originally wrote this routine and uploaded it to hpcvbbs,
but I did not include much for documentation. Since then
I have seen it posted a few times and have heard of some
bugs that I am unable to duplicate. This may be because
the users are using Wayne's routines with this one, or, I
fixed the bugs and don't remember. Either way here is
the latest version.

     The routine should correctly transform the ratio of
any two polynomials with real coefficients. The only
limitation I know of is that the numerator should be of
lower order than the denominator. You may still get the
correct answer if this condition is not met but there are
no guarantees.

     I have found absolutely no problems with this
routine. In fact, I have found 2 errors in the transform
tables of the "CRC Handbook of Mathematical Formulas."  I
am not willing to say that there are no bugs, because I
am sure they exist. Please let me know as they become
apparent. I have been truly amazed with the results this
program returns;  Wayne did a good job with his routines.
He did almost all of the work. Here is an example:

2: { 2 0 4 }
1: { 1 1 0 4 }
L->

Returns: 

1:'-1 - 2*t*EXP(t) + EXP(4*t)'

RT->

     The results of this routine are different depending
on the number of lists on the stack. If there are 2
lists of numbers on the stack, the routine will assume
that they represent a fraction, and the numerator is
divided by the coefficient of the highest order term in
the denominator. This is necessary to keep the ratio
correct since this term is lost in the expansion. In the
event that the stack contains only one list, this program
returns the roots of that list, sorted.

1: { 1 -6 1 24 -20 }
RT->

returns:
1: { 1 2 -2 5 }
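That the returned values really are roots of the input polynomial can be spot-checked by Horner evaluation (a Python sketch, not part of the package):

```python
def peval(coeffs, x):
    # evaluate the coefficient list at x (Horner's rule)
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# every value in the result list is a root of the input list
for root in [1, 2, -2, 5]:
    assert peval([1, -6, 1, 24, -20], root) == 0
```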

FADD

     This routine adds two partial fractions. They must
be in the form:

4: list    (numerator)
3: list    (denominator roots)
2: list    (second numerator)
1: list    (second denominator roots)

&

     This routine convolves two partial fractions. It is
used by the routine ->L. It either does a partial
fraction expansion of one of the two terms and then does
an 's' shifted summation of the terms, or it does a
straight multiplication of the terms -- depending on
which is appropriate. It is still written in user-RPL
since there was no real speed increase when I rewrote it.
I believe that user-RPL should be used whenever possible
to avoid as many headaches as possible.


SORT

     Fairly self explanatory. Not the bubble variety.
It is much smaller than the bubble sorts that I tried and
is a little faster for lists that are shorter than about
20 elements. The sort order is largest to smallest with
weighting on the real value. This means that the
unstable roots will be at the beginning of the list if
they exist.

RED

     This routine uses PDIV to remove any common roots in
the numerator and denominator. It is slow but necessary
if you are working with complex transforms.

Q
  
     This routine takes an object from the stack and
attempts to rationalize it to within 5 digits of
accuracy. In other words, if you have a number like
5.200004543 and want to round it to the nearest fraction
then Q is your program. It is necessary because the
roots returned by the root finder are not exact. And the
calculator drops digits here and there when dividing or
finding roots. The routine will also accept lists,
algebraics, complex numbers, and combinations of any of
the above.
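Q's behavior resembles continued-fraction rationalization, for which Python's fractions module offers a close analogue (the tolerance rule below is limit_denominator's, not necessarily Q's exact 5-digit rule):

```python
from fractions import Fraction

# a value carrying root-finder noise, as in the example above
x = 5.200004543

# nearest fraction with a small denominator
assert Fraction(x).limit_denominator(1000) == Fraction(26, 5)   # i.e. 5.2
```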

PMUL

     This routine takes two polynomials and returns their
product. I have completely rewritten this program to
speed it up and slightly change the function. The
objects do not have to be lists. The routine will now
multiply a list by a constant. It should function
identically to Wayne's routines other than this change.

IRT

     This routine takes the roots of a polynomial and
multiplies them out to find the polynomial.

PADD

     This routine takes two polynomials and adds them.
 In addition, if the object on level 1 is a number (real
 or complex), the routine adds the number to each of the
 elements in the list in the other stack level. These
 changes were made in the interest of speed.

TRIM
 
     This routine takes a list, performs Q on it, and
then strips the leading zeros. It works identically to
the original, I believe.

PDIV

     This routine is NOT the same as Wayne's. This
routine accepts a list, representing a polynomial, and a
number, representing a root of the polynomial, and
returns the list with that root divided out along with
the remainder, which is also the value of the polynomial
evaluated at that root. Example:

2:  { 1 -2 1 }
1:  1

PDIV  (returns)

2: 0
1: { 1 -1 }
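PDIV as described is synthetic division. A Python sketch (names mine) reproducing the example:

```python
def pdiv(coeffs, r):
    # synthetic division by (x - r): returns (quotient, remainder);
    # the remainder equals the polynomial evaluated at r
    acc = 0
    out = []
    for c in coeffs:
        acc = acc * r + c
        out.append(acc)
    return out[:-1], out[-1]

quotient, remainder = pdiv([1, -2, 1], 1)   # divide x^2 - 2x + 1 by (x - 1)
assert quotient == [1, -1] and remainder == 0
```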

RDER
 
     This routine takes two lists (numerator,denominator)
and returns the numerator of the derivative of the ratio.

PDER
 
     Same as above except just for one polynomial and
returns the derivative of that polynomial.

PF

     Does partial fraction expansion of the ratio
(numerator, roots) on the stack.

Please let me know if this program is in any way useful
to any of you. Also please let me know of any
suggestions or bugs that occur.

Good luck,

John Meissner
@@MATRIX     SG

Subj:  Matrix Utilities
Author: Joe Muccillo
Date: 23 Sept. 1993


Here are 10 matrix utility programs which I have found to
be useful in a variety of applications. I have compiled
them into a library ( 800: MATRIX ) in order to make it
easier to access them as subroutines. If anyone has any
queries regarding these programs write to the address
shown below.



GETCO

"GETCO" or GET COLUMN is used to retrieve specified
columns from an array. The columns to be retrieved are
specified in a list with { } delimiters which is placed
in level 1 of the stack. The array is placed in Level 2.
The output is a matrix containing the listed columns only
and redimensioned to suit. The list of columns to be
retrieved must contain integer values no greater than the
width of the matrix in level 2. Any zero elements in
this list are ignored by the program.

Eg: Level 2    [[  1  3  5  7 ]
                [ 16  2  4 11 ]
                [  4  9 10  3 ]]

    Level 1    { 1 0 3 }

    Result     [[  1  5 ]
                [ 16  4 ]
                [  4 10 ]]
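A Python sketch of the same column selection (the matrix modeled as a list of rows; the function name is mine):

```python
def getco(mat, cols):
    # keep the listed columns (1-based); zeros in the list are ignored
    return [[row[c - 1] for c in cols if c != 0] for row in mat]

mat = [[1, 3, 5, 7], [16, 2, 4, 11], [4, 9, 10, 3]]
assert getco(mat, [1, 0, 3]) == [[1, 5], [16, 4], [4, 10]]
```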

GETRO

This utility is similar to "GETCO" except that it
retrieves rows rather than columns.


KERO

"KERO", short for KEEP ROW, retrieves a range of rows
from a given array. The program takes as its arguments a
matrix from Level 3 and two integers defining the start
and end rows of a range. These are in Levels 2 and 1
respectively.

Eg. Level 3    [[ 10  2 12 ]
                [  2  7  9 ]
                [  4  8 10 ]
                [ 13  9 11 ]]

    Level 2     2
    Level 1     3
    Result     [[ 2  7  9 ]
                [ 4  8 10 ]]

COPRO

"COPRO" will copy a specified row in a matrix to other
rows a specified number of times. This program takes 4
arguments from the stack and returns a single item - the
resultant matrix to Level 1 of the stack. The 4
arguments required are:

 1 - The original matrix to be modified in Level 4, A
 2 - The row number to be copied in Level 3, S
 3 - The number of times this row is to be copied in
     Level 2, G
 4 - The row increment to which this row is to be copied
     in Level 1, I. This value must be +ve.

Eg: INPUT  Level 4 [[ 4 3 6 2]
                    [ 1 1 2 1]
                    [ 5 1 1 8]
                    [ 0 1 3 0]
                    [ 3 0 0 0]]

           Level 3          1
           Level 2          2
           Level 1          2

   OUTPUT  Level 1 [[ 4 3 6 2]
                    [ 1 1 2 1]
                    [ 4 3 6 2]
                    [ 0 1 3 0]
                    [ 4 3 6 2]]

It should be noted that the number of rows in the
resultant matrix will be the same as for the original so
that the following inequality must be satisfied in order
to successfully execute the program.

       S + G * I  <= Number of Rows in Matrix

The restriction on I being positive means that row
generation can only proceed downwards, i.e. higher row
numbers cannot be copied to lower ones.
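A Python sketch of COPRO's copying rule, reproducing the example above (arguments follow the A, S, G, I description):

```python
def copro(mat, s, g, i):
    # copy row s (1-based) g times, at increments of i rows
    out = [row[:] for row in mat]
    for k in range(1, g + 1):
        out[s - 1 + k * i] = mat[s - 1][:]
    return out

mat = [[4, 3, 6, 2], [1, 1, 2, 1], [5, 1, 1, 8], [0, 1, 3, 0], [3, 0, 0, 0]]
result = copro(mat, 1, 2, 2)        # row 1 copied to rows 3 and 5
assert result == [[4, 3, 6, 2], [1, 1, 2, 1], [4, 3, 6, 2],
                  [0, 1, 3, 0], [4, 3, 6, 2]]
```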


SWA

This utility is used to swap the first and second columns
of a two column matrix. The matrix to be modified is
placed in Level 1 of the stack.


INTER

"INTER" is used to linearly interpolate between X and Y
coordinates listed in a 2 column matrix. The matrix of
coordinates is in Level 2 and the X value for which an
interpolated value of Y is required is placed in Level 1.
The program outputs the linearly interpolated value of Y
and the row number in the matrix for which the X value
immediately proceeds the X value requested. The X
coordinates must be in ascending order.


Eg:  INPUT  Level 2  [[  1  5 ]
                      [  6 12 ]
                      [  8  7 ]
                      [ 12 13 ]]

            Level 1           9

   RESULTS  Level 2         8.5
            Level 1           4


Note:  If the matrix entered in level 2 has more than two
columns the excess columns will be ignored by the
program.
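A Python sketch of the interpolation rule, reproducing the example above (the returned row is that of the upper bracketing point, matching the result Level 1 = 4):

```python
def inter(mat, x):
    # linear interpolation; returns (y, 1-based row of first X >= x)
    for row in range(1, len(mat)):
        x0, y0 = mat[row - 1][0], mat[row - 1][1]   # extra columns ignored
        x1, y1 = mat[row][0], mat[row][1]
        if x <= x1:
            return y0 + (x - x0) * (y1 - y0) / (x1 - x0), row + 1

mat = [[1, 5], [6, 12], [8, 7], [12, 13]]
assert inter(mat, 9) == (8.5, 4)
```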


ROTAT

This utility performs a coordinate transformation in the
X Y plane of a set of coordinates whose X Y values are in
the first and second rows respectively of a matrix. The
resultant matrix obtained when "ROTAT" is run contains a
redefined set of coordinates with respect to a new axis
system, x y.

The program takes 4 arguments from the stack. These are:

Level 4:  Matrix of coordinates to be transformed.
Level 3:  X coordinate of origin of new axis system.
Level 2:  Y coordinate of origin of new axis system.
Level 1:  Angle of rotation in degrees of new axis
          system, x y, with respect to original X Y
          system. A positive value is a clockwise angle
          of rotation of the new axes.

The output is a 2 column matrix in stack level 1 which
contains the revised set of coordinates with respect to
the new axis system.


DRLIN

"DRLIN" is a utility which draws lines between successive
X Y coordinates in a two column matrix. In order to view
the image created by "DRLIN" additional commands are
required in the relevant programs which use "DRLIN" as a
subroutine, for example FREEZE or PVIEW. ( Refer to HP48
Owner's Manual for additional information on viewing
graphical images ). Scaling of the image also needs to
be carried out via additional commands not contained
within "DRLIN".


INVERT

"INVERT" is a stack manipulation utility which
supplements stack operations on the HP48. This utility
inverts the stack given a number of stack elements to
invert in Level 1.

Eg:  Level      INPUT     OUTPUT

         6        25
         5        33        25
         4         6        17
         3         2         2
         2        17         6
         1         4        33

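A Python sketch of the operation, with the stack modeled as a list whose last element is level 1 (and the count of 4 already consumed from level 1):

```python
def invert(stack, n):
    # stack[-1] is level 1; reverse the order of the top n elements
    return stack[:-n] + stack[-n:][::-1]

# levels 5..1 remaining after the count 4 is taken from level 1
stack = [25, 33, 6, 2, 17]
assert invert(stack, 4) == [25, 17, 2, 6, 33]
```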
COMBIN

The "COMBIN" utility combines the columns of 2 matrices
in levels 1 and 2 of the stack such that the number of
columns in the final matrix equals the sum of the number
of columns in the original matrices, and the number of
rows remains the same. In other words the matrix
dimensions follow the rule:

{ a b } + { a c } = { a b+c }.

The two matrices to be combined must have the same number
of rows.

Eg:  INPUT    Level 2  [[ 1 7 ]       Level 1 [[ 5 ]
                        [ 2 6 ]                [ 8 ]
                        [ 3 9 ]                [ 3 ]
                        [ 4 5 ]]               [ 2 ]]

     OUTPUT   Level 1  [[ 1 7 5 ]
                        [ 2 6 8 ]
                        [ 3 9 3 ]
                        [ 4 5 2 ]]
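A Python sketch of the column-combining rule (matrices modeled as lists of rows; the row counts must match, as noted above):

```python
def combin(left, right):
    # append the columns of `right` to `left`, row by row
    return [a + b for a, b in zip(left, right)]

left = [[1, 7], [2, 6], [3, 9], [4, 5]]
right = [[5], [8], [3], [2]]
assert combin(left, right) == [[1, 7, 5], [2, 6, 8], [3, 9, 3], [4, 5, 2]]
```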


It is also possible to use the "COMBIN" utility to
combine the rows of 2 matrices in levels 1 and 2 of the
stack. This can be achieved by transposing both of the
matrices to be combined prior to using the "COMBIN"
program and then transposing the resultant matrix.

Eg:  INPUT    Level 2  [[ 1 2 3 4 ]   Level 1 [[ 5 8 3 2 ]]
                        [ 7 6 9 5 ]]

     After             [[ 1 7 ]               [[ 5 ]
     Transposing        [ 2 6 ]                [ 8 ]
                        [ 3 9 ]                [ 3 ]
                        [ 4 5 ]]               [ 2 ]] 

     OUTPUT   Level 1  [[ 1 7 5 ]
                        [ 2 6 8 ]
                        [ 3 9 3 ]
                        [ 4 5 2 ]]

     After             [[ 1 2 3 4 ]
     Transposing        [ 7 6 9 5 ]
                        [ 5 8 3 2 ]]




Joe Muccillo
66 Prospect St
Pascoe Vale, 3044
Melbourne
AUSTRALIA

@@NTL2       SG
(Comp.sources.hp48)
Item: 103 by kalb@informatik.uni-erlangen.de
Author: [Klaus Kalb]
  Subj: ntl2 - Number Theory
  Date: Mon May 04 1992

[Man, oh man; I wish I had this when I was taking that
 Number Theory course at UCI!  -jkh-]

NTL2 is a library for the HP48SX pocket calculator that
performs basic number-theoretic calculations. It provides
modular arithmetic, factorization of integers, chinese
remaindering, and some number-theoretic functions.

It is provided as is, without any warranty or assertion
of fitness for any purpose. Note that this library makes
use of undocumented and unsupported features of the HP48.
This might cause data loss or even hardware damage. Use
it at your own risk.

This software may be distributed freely for noncommercial
purposes, as long as this file and the documentation file
is distributed with it. You may not make any changes to
the software or to the documentation, put it into ROM,
publish it in journals or sell it without written
permission of the author.

Parts of the code in this library were developed by
Jurjen N. E. Bos and are included with his kind
permission. Thus this library contains a slightly
improved version of his `Ultimate Factoring Program'.

21/1/92, -KK
        Klaus Kalb
        Huettendorfer Weg 14
        W-8510 Fuerth 17
        Federal Republic of Germany

        email: kskalb@informatik.uni-erlangen.de
        

This is the complete documentation of the library NTL2
version 2.8. The ID of this library is 1672 and the title
reads `NTL2 2.8 Number Theory`

The command `AboutNTL2` should display the following screen:

        NTL2 2.8 Number Theory
        created 04/15/92 00:26
        (C) 1991 by Klaus Kalb
        Credits to Jurjen Bos

and return the version identification string "2.8" to
level 1.

This library attaches itself automatically to the
'HOME'-directory. Consult your HP48 Owner's Manual for
details about libraries.

The library uses a reserved variable 'Data.NTL2' to store
the currently set modulus and other information.
Contents of 'Data.NTL2':

    1 : Modulus (as entered)
    2 : Modulus (as a binary integer)
    3 : Phi(Modulus)
    4 : Lambda(Modulus)
    5 : Factorization of Modulus
    6 : Factorization of Lambda(Modulus)
    7 : Splitting for CRA               
    8 : Data for CRA                 
    9 : Splitting in Binary             



 THE LIBRARY COMMANDS 


AboutNTL2

    Displays a screen full of information about NTL2 and
    creates a temporary menu to access the various pages
    of the NTL2 menu.
    The current version ID string "2.8" is returned.
    If an entry in the temporary menu is executed with
    the string "2.8" in level 1, this string is dropped
    and the address of the author is displayed.


MSet

    Sets the modulus to the argument. All entries in
    Data.NTL2 will be destroyed.

    If the argument is a list, the entries must be
    relatively prime. In this case the modulus will be
    set to the product of all entries in the list (if
    this number is smaller than 2^63) and the chinese
    remainder support will be initialized to the given
    factorization.

    The modulus determines the representation of
    residues:

    ---    If it is positive, then the least positive
     residues will be used.

    ---    If it is negative, then the residue with the
     least absolute value will be used.

    ---    If it's a binary, all residues will be
     represented as binaries.

    Note that even if the modulus has been set to a real
    number, all arithmetic will be performed with
    binaries. So be sure that the current wordsize is
    large enough.
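The two real-number representations can be sketched in Python (an illustration only; the binary-integer case and the wordsize concern are not modeled here):

```python
def residue(x, m):
    # m > 0: least positive residue; m < 0: residue of least absolute value
    if m > 0:
        return x % m
    m = -m
    r = x % m
    return r - m if r > m // 2 else r

assert residue(9, 5) == 4     # least positive residue mod 5
assert residue(9, -5) == -1   # least absolute value: 9 = 2*5 - 1
```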


MGet

    Retrieves the current modulus.


MTgl

    If the current modulus is a binary number less than
    or equal to 10^12, the modulus will be switched to
    its negative real value.

    If the current modulus is a real number, its sign
    will be changed.

    This function allows changing the representation
    without destroying the values stored in Data.NTL2.


MPrg

    Purges Data.NTL2 from the current directory.



 BASIC MODULAR ARITHMETIC 

->Mod

    Converts its argument into the current modular
    representation. Algebraics containing only global and
    local names and the basic functions +, -, *, /, INV()
    and NEG() will be evaluated.

    Any non-integral reals will be converted using ->Q.

    Note that you have to be very careful about roundoffs
    when you apply ->MOD to a non integral real.

    If the argument is a list, ->MOD will be applied to
    each element of the list and the individual results
    will be combined to a new list.

    If the argument is a real array or a vector, ->MOD
    will be applied to any entry.

    stack diagram:
        #m                     r (binary or real)
        m                      r (binary or real)
        `Formula`              r (binary or real)
        { o1 o2 ... }          { r1 r2 ... }
        [ real vector ]        [ real vector ]
        [ [ real matrix ] ]    [ [ real matrix ] ]


MAdd

    Modular addition.
    Supports chinese remainder representation.

    stack diagram:
       #a #b                    a+b (binary or real)
       a b                      a+b (binary or real)
       #a b                     a+b (binary or real)
       a #b                     a+b (binary or real)
       { a_1 ... a_n } b        { a+b_1 ... a+b_n }
       { a_1 ... a_n } #b       { a+b_1 ... a+b_n }
       #a { b_1 ... b_n }       { a+b_1 ... a+b_n }
       a { b_1 ... b_n }        { a+b_1 ... a+b_n }
       { a_1 ... } { b_1 ... }  { a_1+b_1 ... a_n+b_n }


MSub

    Modular subtraction.
    Supports chinese remainder representation.

    stack diagram:
        #a #b                    a-b (binary or real)
        a b                      a-b (binary or real)
        #a b                     a-b (binary or real)
        a #b                     a-b (binary or real)
        { a_1 ... a_n } b        { a-b_1 ... a-b_n }
        { a_1 ... a_n } #b       { a-b_1 ... a-b_n }
        #a { b_1 ... b_n }       { a-b_1 ... a-b_n }
        a { b_1 ... b_n }        { a-b_1 ... a-b_n }
        { a_1 ... } { b_1 ... }  { a_1-b_1 ... a_n-b_n }


MMul

    Modular multiplication.
    Supports chinese remainder representation.
    Also allows multiplication of a real vector by a
    constant.

    stack diagram:
      #a #b                    a*b (binary or real)
      a b                      a*b (binary or real)
      #a b                     a*b (binary or real)
      a #b                     a*b (binary or real)
      { a_1 ... a_n } b        { a*b_1 ... a*b_n }
      { a_1 ... a_n } #b       { a*b_1 ... a*b_n }
      #a { b_1 ... b_n }       { a*b_1 ... a*b_n }
      a { b_1 ... b_n }        { a*b_1 ... a*b_n }
      { a_1 ... } { b_1 ... }  { a_1*b_1 ... a_n*b_n }
      [ a_1 ... a_n ] c        { c*a_1 ... c*a_n }


MDiv

    Modular division.
    Note that the divisor must be a unit.
    Supports chinese remainder representation.
    Also allows division of a real vector by a
    constant.

    Note that it is more efficient to multiply the vector
    by the inverse.

    stack diagram:
       #a #b                    a/b (binary or real)
       a b                      a/b (binary or real)
       #a b                     a/b (binary or real)
       a #b                     a/b (binary or real)
       { a_1 ... a_n } b        { a/b_1 ... a/b_n }
       { a_1 ... a_n } #b       { a/b_1 ... a/b_n }
       #a { b_1 ... b_n }       { a/b_1 ... a/b_n }
       a { b_1 ... b_n }        { a/b_1 ... a/b_n }
       { a_1 ... } { b_1 ... }  { a/b_1 ... a/b_n }
       [ a_1 ... a_2 ] c        { a_1/c ... a_n/c }


MExp

    Modular exponentiation.
    Accepts negative exponents as well as positive ones.
    Supports Chinese remainder representation for the
    base only.

    stack diagram:
      #b #e                      b^e (binary or real)
      b e                        b^e (binary or real)
      #b e                       b^e (binary or real)
      b #e                       b^e (binary or real)
      { b_1 ... b_n }    #e      { b_1^e ... b_n^e }
      { b_1 ... b_n }    e       { b_1^e ... b_n^e }
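For readers following along on a PC, the negative-exponent
behavior can be sketched in Python (an illustration of the
idea only, not the library's code; a negative exponent
means raising the multiplicative inverse of the base):

```python
def mexp(b, e, m):
    # For e < 0, raise the multiplicative inverse of b to |e|.
    # The inverse exists only when b is a unit modulo m.
    if e < 0:
        b = pow(b, -1, m)   # modular inverse (Python 3.8+)
        e = -e
    return pow(b, e, m)

# 3^-1 mod 7 is 5, so 3^-2 mod 7 = 25 mod 7 = 4
print(mexp(3, -2, 7))   # 4
```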


MInv

    Modular inversion.
    Calculates the multiplicative inverse of the argument
    using an extended GCD algorithm.

    Supports Chinese remainder representation.

    stack diagram:
       #m                         r (binary or real)
       m                          r (binary or real)
       { m_1 m_2 ... m_n }        { r_1 r_2 ... r_n }
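The extended-GCD approach can be sketched in Python
(illustrative only; the names are mine, not the library's):

```python
def xgcd(a, b):
    # Returns (g, x, y) with g = gcd(a, b) = x*a + y*b.
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def minv(a, m):
    # Multiplicative inverse of a modulo m (a must be a unit).
    g, x, _ = xgcd(a % m, m)
    if g != 1:
        raise ValueError("argument is not a unit modulo m")
    return x % m

print(minv(7, 40))   # 23, since 7*23 = 161 = 4*40 + 1
```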


MNeg

    Multiplies its argument with -1.
    Supports chinese remainder representation.

    stack diagram:
       #m                         r (binary or real)
       m                          r (binary or real)
       { m_1 m_2 ... m_n }        { r_1 r_2 ... r_n }



 ADVANCED MODULAR ARITHMETIC 


M\v/

    Extracts the square root of its argument (if
    possible).

    Needs an odd modulus.
    This needs the 'standard' Chinese remainder setup.

    stack diagram:
        #x               r (binary or real)
        x                r (binary or real)


MSq?

    Checks whether its argument is a square modulo the
    current modulus. Needs an odd modulus.

    stack diagram:
        #x               0/1
        x                0/1


MOrd

    Calculates the order of the given element.
    
    stack diagram:
        #x               p (binary or real)
        x                p (binary or real)


MNPr

    Returns the next primitive element modulo the current
    modulus. If there are no primitive elements, an error
    will be generated.

    stack diagram:
        #x               p (binary or real)
        x                p (binary or real)


MPr?

    Returns 1 if its argument is a primitive element
    modulo the current modulus.

    stack diagram:
        #x               0/1
        x                0/1



 CHINESE REMAINDER TECHNIQUES 


CSet

    Sets the Chinese remainder mode.
    The argument is a list of pairwise relatively prime
    integers. Note that binaries and integral reals
    (positive and negative) are allowed in the input
    list.

    If a modulus is set, the product of all numbers in
    the list must match the modulus. If not, the modulus
    is set to this product as a binary integer if this
    product can be represented in 63 bits.

    stack diagram:
           { r_1 ... }               


CGet

    Recalls the current Chinese remainder setting to the
    stack.

    stack diagram:
                         { r_1 ... }


->Crm

    Converts a number into the current Chinese remainder
    representation. The result will be a list of residues
    according to the call to CSet. If CSet hasn't been
    called, the current modulus will be factored and a
    default setting will be made up.

    stack diagram:
        m                { r_1 ... }
        #m               { r_1 ... }


Crm->

    Converts from the Chinese remainder representation
    back to the normal modular representation.

    This is done by the well-known Chinese remainder
    algorithm. The input must be a list of real or binary
    integers.

    stack diagram:
        { r_1 ... }          m (binary or real)



 BASIC NUMBER THEORY ROUTINES 


BGcd

    Calculates the greatest common divisor of two
    numbers.

    This is done by a binary algorithm programmed in ML
    (machine language).

    stack diagram:
        #a #b            #d
        a b              d
        #a b             #d
        a #b             #d


XGcd

    Calculates the greatest common divisor of two numbers
    and gives coefficients for a representation of the
    gcd as a linear combination of the arguments:
        gcd(a,b) = d = t*b-s*a with 0<=t<=a and 0<=s<b
    Both arguments must be positive.

    stack diagram:
        #a #b            #s #t #d
        a b              s t d


Lcm

    Calculates the least common multiple of two numbers.
    The result will always be positive.
    BUGS: an overflow isn't detected

    stack diagram:
        #a #b            #r
        a #b             #r
        #a b             #r
        a b              r



 NUMBER THEORY FUNCTIONS 

Ntf

    Calculates the Euler totient function.
    If the argument is a list, the entries must be in
    ascending order.

    stack diagram:
           #m                     #Phi(m)
           m                      Phi(m)
           { p_1 ... p_n }        Phi(p_1 * ... * p_n)


Ntf\Gl

    Calculates Carmichael's Lambda function.
    The Carmichael Lambda function gives the exponent of
    the group of units modulo m.
    If the argument is a list, the entries must be in
    ascending order.

    stack diagram:
        #m                     #Lambda(m)
        m                      Lambda(m)
        { p_1 ... p_n }        Lambda(p_1 * ... * p_n)


Ntf\Gs

    Calculates the number of divisors of the argument.
    If the argument is a list, the entries must be in
    ascending order.

    stack diagram:
        #m                     #Sigma(m)
        m                      Sigma(m)
        { p_1 ... p_n }        Sigma(p_1 * ... * p_n)


Ntf\Gm

    Calculates the Moebius function.
    If the argument is a list, the entries must be in
    ascending order.

    stack diagram:
        #m                     #Mu(m)
        m                      Mu(m)
        { p_1 ... p_n }        Mu(p_1 * ... * p_n)


Jacobi

    Calculates the Jacobi symbol for its arguments using
    the quadratic reciprocity law. Note that the value
    is zero if the arguments aren't relatively prime
    and that the value in level 1 must be an odd number.
    The result will always equal +1, -1 or 0.

    stack diagram:
      a b              (a/b)
      #a #b            (a/b)
      a #b             (a/b)
      #a b             (a/b)



 PRIMALITY TESTING AND FACTORING 

[Note: These four functions are available in a separate
 smaller library called FCTR.LIB on Goodies Disk #8.
 -jkh-]

Prm?

    Tests whether its argument is a prime number.
    Returns 1 if it's a prime, else 0.
    If the number to test is greater than 25*10^9, the
    test may take up to several minutes.
    Negative primes are recognized correctly.

    stack diagram:
        #m               0/1
        m                0/1


NPrm

    Finds the smallest prime greater than the absolute
    value of the argument.

    stack diagram:
        #x               #p
        x                p


PPrm

    Finds the greatest prime less than the absolute value
    of the argument.

    stack diagram:
        #x               #p
        x                p


Factor

    Factors its argument into primes.

    The output will be a list containing all prime
    divisors (repeated according to their multiplicity)
    in ascending order.

    The entries in the list will have the same type as
    the argument.

    The number to be factored must be positive.

    stack diagram:
           #x       { #p_1 #p_2 ... }
           x        { p_1 p_2 ... }



 ADDITIONAL MODULUS ARITHMETIC 


MRand

    Generates a random number modulo the current modulus.

    stack diagram:
         r (binary or real)



 MENU HANDLING 


MenuNTL2

    Displays a temporary menu to access the various pages
    of the NTL2 library menu. This gives access to NTL2
    similar to the built-in menu keys like MTH.
@@PFIT       SG
          +---------------------------------------+
          |  POLYNOMIAL CURVE FITTING REVISITED   |
          |          by Joseph K. Horn            |
          +---------------------------------------+

Included as an example program in Jim Donnelly's book,
"The HP 48 Handbook, 2nd Edition" (pg. 83-85).

[Note: This is a mathematically-exact fit program, not a
 statistical "best fit". For statistical work, see
 CUFIT. -jkh-]

David Kurtz's 'POLYFIT' program on Goodies Disk #6 is
2,256 bytes long, and takes 1.5 hours to fit 15 data
points to a polynomial exactly (not by least-squares
approximation). Here's a program that does the same
task, but with better accuracy, in 197 bytes, in 28
seconds!  And what's really cool, there is another
program included here that does a pseudo scatter plot
with large, easily visible points, and then, on top of
that, plots the polynomial that fits those points, with
labeled axes.

The 'PFIT' program takes the points' coordinates as its
input (in matrix form), and gives back the coefficients
of the polynomial as its output (in array form).

The 'SHOP' program also takes the points' coordinates as
its input, but does a kind of scatter plot (using arrow
grobs instead of single pixels), and then connects the
points with the polynomial generated by PFIT. Note: SHOP
creates then purges 'DAT', 'PPAR', and 'EQ' in the
current directory.

'PFIT' EXAMPLE: What polynomial passes through (-2,-3),
(0,7), (1,6), and (2,9)?

METHOD: Press P1 and see the proper input for this
example:

        [[ -2 -3 ]
         [  0  7 ]
         [  1  6 ]
         [  2  9 ]]

Then run PFIT (Polynomial Fit). See:

        [ 1 -1 -1 7 ]

These are the coefficients of the polynomial answer; thus
y = x^3-x^2-x+7 passes through those points.

Note: Leading zeros (or leading values very close to
zero) mean that you supplied more points than necessary.
Ignore them.
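On a PC the same exact fit can be sketched by solving the
Vandermonde system of the points; illustrative Python, not
the HP48 program:

```python
def pfit(points):
    # Exact polynomial through n points: build the Vandermonde
    # matrix (highest power first) and solve by Gaussian
    # elimination with partial pivoting.
    n = len(points)
    A = [[x ** (n - 1 - j) for j in range(n)] + [y] for x, y in points]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = A[r][n] - sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / A[r][r]
    return coef

print([round(c) for c in pfit([(-2, -3), (0, 7), (1, 6), (2, 9)])])
# [1, -1, -1, 7], the coefficients from the PFIT example
```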

'SHOP' EXAMPLE: What do those points and that polynomial
LOOK like?

METHOD: Press P1, then run SHOP (Show Polynomial). See
the points, then the polynomial, and then labeled axes
get displayed. Press ON to exit graphics mode, or press
GRAPH (PICTURE on the G/GX) to exit scrolling mode.

'P2' is another sample input. Try it with PFIT and SHOP.
@@POLY46SX   S-
BEGIN_DOC_POLY_version_4.6sx

                                 Gothenburg, October 1993

Hello everybody!


Preludium to the former preludium :


I was wrong!  Here's yet another version ... but it's
well worth it ;-) Please re-read this document, because
new features are implemented !! Of course, the chances of
my making yet another version are now extremely small,
since I don't have the '48 anymore ... :-(

Preludium to this last version ever (coming from me) :

(that was v4.4 ...:)

From the beginning, I wanted to make a short and fast
library to deal with polynoms. I did several versions,
and versions 3.3 & 3.5 were quite stable, so they were
released to the net. Then I needed to develop the
product to have the highest possible degree of accuracy
on the market, without losing speed. Versions 4.x were
released to the net. Stable, I had thought, until I got
lots of letters containing everything from bug reports to
wishes for new features, etc. Thanks to the enormous
response this library has had among its users, I could
develop this package into a very stable version - the
last reports were about small things ...  It was you as
users who helped me, and I thank you very much for that.
Also, you have a better program now. And how about the
initial objectives ? Well, the lib sure has grown a lot,
but that's the only way of trapping more & more weird
cases, and thus making it as safe as possible.
[...more stuff deleted...]

Changes with respect to version 4.2 :


1) A reported bug from versions 4.X was corrected - a
   comma was missing in an internal Long Real, so for
   some unpredictable polynoms it would ask you to
   "Please, report a bug!", and for others it would run
   forever.

2) In response to many letters asking for changes in the
   output of PDIV and PFACT, they now present their
   results differently, just to make it easy for you to
   use those routines from a program.

3) Other unreported bugs were fixed as well, like the
   [1] { ... } PF case, etc.


Changes with respect to version 4.4 :


1) The biggest change for you as a user: PF has become
   really user-friendly! Now it's perhaps the most
   complete Partial Fraction Decomposition program on the
   market !! (see the doc about that)

2) The biggest change for you who like to compare
   execution times: thanks to new Assembly routines in
   vital places, the speed of some routines (like ROOTS)
   has almost doubled !! (see the thanks part about that)

3) PDIV, PMULT, PADD and RDER won't error with 'zero
   polynoms' anymore ... (see the error part of the doc)

Changes with respect to version 4.5 :


1) A bug that *everybody* reported in PF is fixed. Also,
   the function is improved. Re-read that part of the
   manual.


Copyright (c) 1993  Carlos Ferraro except :

L%SQRT.asm    Kindly donated by Mario Mikocevic & Mika
Heiskanen

L%FLEVAL.asm  Kindly donated by Mika Heiskanen

All Rights Reserved


DISCLAIMER :

POLY 4.6sx and this manual are presented as is, without
warranties, expressed or implied. The author makes no
guarantee as to the fitness of this software.

POLY 4.6sx can be copied freely,  provided the software,
including this manual, is copied in its entirety. The
user cannot be charged, in whole or in part, except for
the cost of reproduction. No part of this package may be
used for commercial purposes without written permission
from the author.

POLY 4.6sx uses a lot of unsupported entries, but it has
been tested successfully on ROM versions E and J. If you
have an earlier version, you'd better back up everything
before trying it. If you find bugs or possible
misbehaviors, please let me know at once.



This is a very fast library (full sys-rpl & assembly !)
to deal with the fundamental polynom necessities:

   ROOTS (all roots of polynom)
   PDIV (polynom synthetic division)
   PMULT (polynom multiplication) 
   PADD (polynom addition)
   PEVAL (polynom evaluation at a point)
   PDER (polynom derivation)
   PF (partial fraction decomposition)
   R->P (polynom builder - having the roots)
   PFACT (polynom primitive factors)
   ALG-> (algebraic to polynom)
   ->AP (the opposite of above)

IMPORTANT : Unlike other polynom utilities, I use vectors
instead of lists to represent polynoms. This is done only
to ensure total control of the types of the
input elements, since the internal representation of the
polynoms is a list of L% (this means Long
Reals/Complexes).

For those who aren't familiar with non-symbolic polynom
representation:

Example:

x^4 + 2*x^3 - 4*x = [ 1 2 0 -4 0 ]

Example:

7*x^3 - 5*x^2*i + 3 + 2*i = [ (7,0) (0,-5) (0,0) (3,2) ]

All functions perform automatic deflation of leading
zeros in the representation.

Error Messages :


1) In case you do find some weird polynom that makes
   ROOTS crash, it will ask you to "Please report a bug!"
   This should be quite difficult with this version
   though, due to much more testing of all kinds of weird
   cases, as well as a totally new, sophisticated method
   for initializing the numerical computations...

2) In case you pass two linearly dependent polynoms as
   arguments to RDER, it will tell you that it's a
   "Constant Rational"

3) In case you input an algebraic which can't be
   converted into a polynom using the Maclaurin series
   as an argument for ALG-> , it will say exactly where
   in the calculations the error occurred. This often
   happens if the algebraic or its derivatives are not
   defined at x=0

4) ROOTS will complain with any (deflated) polynom of
   degree less than one - because that's nothing else
   than a constant!

5) In case you input a list that doesn't contain ONLY
   Long Reals OR Long Complexes as argument for ROOTS, it
   will error with "Bad Argument Type"

6) In case you input a polynom as denominator for PF
   which contains one multiple 2nd degree primitive,
   it'll error with "Case Not Allowed Yet". The algorithm
   simply isn't defined for those cases. If somebody
   sends me a better algorithm, that'll be reason enough
   to release a new version ;)


The programs :


1) ROOTS 
takes all the roots (real or complex) of real or complex
polynoms. For polynoms of degree 1 & 2, it uses the
corresponding algebraic formulas. For degree 3 and above,
it uses Laguerre's method to come close to the root, and
then a variation of Newton's method using the 2nd
derivative as well, for quick and safe convergence to the
root. An attempt is made to round every element of the
resulting polynoms, as well as every root - taking the
real and imaginary parts of the complex numbers
separately. If a complex root is found to have a
(rounded) imaginary part equal to zero, it will simply be
displayed as a real ! Yet, this could be undesired when
looking for roots known to be close to (but not exactly)
integers. (example: (x + 000.1)^3 ) Simply set UserFlag 4
and no rounding will be performed at any stage of the
calculations.

ROOTS is both faster and more accurate than before - at
least one decimal more accurate - 10 decimals accuracy is
what you get for sure.

NOTE !  A new feature in ROOTS : It accepts { L% } - but
        not { F% } ( it checks that all elements are
        either all %% or all C%% :) - and it takes the
        roots of a polynom represented that way ! This is
        especially interesting after having used the
        hidden features, like multiplication, etc, to get
        the roots with a higher degree of accuracy than
        usual. Observe that the answer is { L% } in this
        case !

Input:
1: [poly] or {poly_L%}
Output:
1: {roots} or {roots_L%%}

EXAMPLE: Taking the roots of [ 4 33 68 15 ] :
the answer { -3 -5 -.25 }
comes in ~ 0.87 seconds - faster than PROOT

EXAMPLE: Taking the roots of [ 1 3 2 -1 -3 -2 ]  gives 
the answer { 1
           (-.5 , -.866025403783)
           (-.5 , .866025403783)
           -2 -1 }
           in ~ 3.5 seconds - faster than PROOT

EXAMPLE: Taking the roots of [ (4,0) (33,1) (76,5) (57,0)
(10,0) ] gives the answer

{ (-2.10855823443,-.490913020551)
-5
(-.892806137173,.252602578667)
(-.248635628397,-1.16895581159E-2)
}
after ~ 6.5 seconds
Observe that the coefficients of the polynom are complex!

EXAMPLE: Taking the roots of [ 1 21 183 847 2196 3024
1728 ] :

the answer { -4 -4 -4 -3 -3 -3 }

comes in  ~ 3.3 seconds - much faster than PROOT

EXAMPLE: Taking the roots of [ 1 1.533333333333 -3.15
-1.933333333333 -.25 ] :

the answer { 1.5 -2.5 -.3333333333333 -.2  }

comes in  ~ 2.0 seconds

2) PDIV - synthetic polynom division, or dividing a
polynom by a number.

Input:
2: [poly]
1: [poly] or % (that is, a real) or C% (that is, a
complex)

It will now ALWAYS return the remainder at level 2, just
to make it easier for you to use this routine in your own
program !

Output:
2: [remainder poly]
1: [poly]
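The level-2-remainder convention can be sketched in Python
(illustrative only; coefficient lists are highest power
first, as in the vector representation above):

```python
def pdiv(num, den):
    # Polynom long division, highest coefficient first.
    # Returns (remainder, quotient), matching PDIV's levels 2 and 1.
    num = list(num)
    dn = len(den) - 1
    q = []
    while len(num) - 1 >= dn:
        c = num[0] / den[0]
        q.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)                  # leading coefficient is now zero
    return num, q

rem, quo = pdiv([1, -6, 11, -6], [1, -1])   # divide by (x - 1)
print(rem, quo)   # [0.0] [1.0, -5.0, 6.0]
```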

3) PMULT - multiplication of 2 polynoms
Input:
2: [poly]
1: [poly]
Output:
1: [poly]

4) PADD - guess what ...
Input:
2: [poly]
1: [poly]
Output:
1: [poly]

5) PEVAL - evaluation of a polynom at a point (Horner's
scheme)

Input:
2: [poly]
1: % or C%
Output:
1: % or C%
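Horner's scheme in a Python sketch (illustrative only):

```python
def peval(poly, x):
    # Horner's scheme: one multiply and one add per coefficient.
    acc = 0
    for c in poly:
        acc = acc * x + c
    return acc

print(peval([1, -6, 11, -6], 4))   # 64 - 96 + 44 - 6 = 6
```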

6) PDER - derivation of a polynom
Input:
1: [poly]
Output:
1: [poly]

7) PF - Partial Fractions expansion Now it supports 2nd
        degree forms in the denominator, except for the
        case named in the Error part. It supports 2
        different ways of input : as before, with output
        as before, or simply the rational polynom as
        you'd represent it for RDER. That is :

Input  (modified "old way"):
2: [poly]              (the polynom corresponding to the
                        numerator!)

1: { list of roots }   (roots of the polynom
                        corresponding to the denominator!)

Output (old way):
2: { sorted list of roots }  ( now sorted in ascending
                               order , C% last )

1: { list of coefficients of the expansion, in order with
     the roots}

OR

Input  (new way):
2: [poly]
1: [poly]
Output (new way):
3:   ...         ( ... = all the necessary stack levels)
2: { ob [poly] }
1: { ob [poly] } (where ob is either a real or a polynom)

Where ob means the numerator, which can be a real or a
polynom, and [poly] means the denominator resulting from
the Partial Fraction decomposition of the rational
polynom !

IMPORTANT: the order in which the lists appear on the
           stack is significant for the multiple terms !
           That is, whenever you see 2 or more equal
           terms in the denominator, the 1st means the
           linear term, the 2nd means the quadratic term,
           etc, etc. Still don't get it ? Watch the
           examples below and review your Calculus books
           ... ;-) And the same goes for the order of the
           elements of the list in case you use the old
           way! I've heard some Electrical Engineers
           prefer the old way of inputting because of
           some of their formulas (?) Simply hit EVAL
           when you see the list, and the order of the
           elements on the stack has the same
           significance as for the new way.

NEW FEATURES: When inputting the 'old way', the
              elements in the list are automatically
              sorted in increasing order for the real
              elements, and the same for the complex
              elements. No sorting is needed between real
              and complex elements, so the complex ones
              always come last. So, for your convenience,
              I put the sorted list at level 2, so that
              you can see the correct correspondence
              between the roots and their coefficients.
          
EXAMPLES of the new way :


Input:
2: [1]
1: [1 2 2 2 1]

Output:                (After 4.27 seconds)

3: { [-.5 0] [1 0 1] }
2: { .5 [ 1 1] }     ( [1 1] representing now (x+1)^2 )
1: { .5 [ 1 1] }     ( [1 1] representing simply (x+1) )

And so that means that :
'1/(x^4+2*x^3+2*x^2+2*x+1)'  =
'1/(2*(x+1))+1/(2*(x+1)^2)-x/(2*(x^2+1))'

Input:
2: [1 -2 2]
1: [1 -3 2 0]

Output:                (After 1 second)

3: { 1 [1 0] }
2: { -1 [ 1 -1] }
1: { 1 [ 1 -2] }

And so that means that :
'(x^2-2*x+2)/(x^3-3*x^2+2*x)' = '1/(x-2)-1/(x-1)+1/x'

Input:
2: [1]
1: [1 0 0 1]

Output:                (After 3 seconds)

2: { [-.3333333333 .66666666666] [ 1 -1 1] }
1: { .3333333333 [ 1 1] }

And so that means that :
'1/(x^3+1)' = '1/(3*(x+1))-(x-2)/(3*(x^2-x+1))'

EXAMPLE of the old way :


2: [1 5]
1: [1 -6 11 -6]   ROOTS    { 2 1 3 }
So that the input is :
2: [1 5]
1: { 2 1 3 }            ( the roots list )
Output:
2: { 1 2 3 }            ( the roots list, now sorted )
1: { 3 -7 4 }
And so that means that :
'(x+5)/(x^3-6*x^2+11*x-6)'  = '3/(x-1)-7/(x-2)+4/(x-3)'
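For distinct roots, the coefficients of the old way are
the residues N(r_i)/D'(r_i); here is a Python sketch of
the example above (illustrative only, not the library's
code):

```python
def horner(p, x):
    acc = 0
    for c in p:
        acc = acc * x + c
    return acc

def pf_coeffs(num, roots):
    # Partial-fraction coefficients for DISTINCT roots r_i of the
    # denominator D: the coefficient of 1/(x - r_i) is N(r_i)/D'(r_i).
    D = [1]
    for r in roots:                  # build D as a product of (x - r_i)
        prev = D
        D = prev + [0]
        for i in range(len(prev)):
            D[i + 1] -= r * prev[i]
    deg = len(D) - 1
    Dp = [c * (deg - i) for i, c in enumerate(D[:-1])]   # derivative D'
    return [horner(num, r) / horner(Dp, r) for r in roots]

print(pf_coeffs([1, 5], [1, 2, 3]))   # [3.0, -7.0, 4.0]
```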


8) R->P - Roots into Polynom, actually the inverse of
ROOTS

Input:
1: {list of roots}
Output:
1: [poly]
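A Python sketch of the idea (illustrative only): multiply
out (x - r) for each root.

```python
def r_to_p(roots):
    # Builds the monic polynom with the given roots by repeatedly
    # multiplying the running product by (x - r).
    p = [1]
    for r in roots:
        q = p + [0]                  # p * x
        for i in range(len(p)):
            q[i + 1] -= r * p[i]     # minus r * p
        p = q
    return p

print(r_to_p([1, 2, 3]))   # [1, -6, 11, -6]
```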

9) PFACT - Polynom FACTorization into primitive polynoms -
     that is, into linear polynoms, or polynoms of 2nd
     degree having only complex roots.

Input:
1: [poly]
Output:
1: { [poly] ... [poly] } 

Meaning that if the polynom is factorizable, the
resulting primitives will now be placed in a list, just
to make it easier for you to use this routine in your own
program !

EXAMPLE:
Input:
1: [ 1 -6 11 -6 ]
Output:               ( In 0.8 seconds )
1: { [ 1 -1 ] [1 -2 ] [ 1 -3 ] }

10) RDER - Rational DERivation, that is, derivation of a
           rational polynom. Note: see the error part of
           the document.

Input:
2: [poly]
1: [poly]
Output:
2: [poly]
1: [poly]

EXAMPLE:
Input:
2: [ 1 -2 ]
1: [ 1 -3 ]
Output:              ( In 0.4 seconds )
2: [ -1 ]
1: [ 1 -6 9 ]

11) ALG->   - Algebraic polynom/function into the
              polynom-vector representation (may be slow
              for complicated algebraic expressions).
              Uses the Maclaurin expansion, so not
              everything can be converted into a polynom;
              for example, functions that are
              non-analytic at the origin will error! (see
              the Error part of the document !) Note:
              Flag -3 is only temporarily cleared if set,
              but left unchanged !!

Input:
3: algebraic poly
2: the bounded variable
1: an integer = the degree of the polynom  
  (this number may be higher than necessary if the
  algebraic is a polynom)

Output:
1: [poly]     ( deflated of leading zeros if possible )

EXAMPLE:
Input:
3: 'Y-1'
2: 'Y'
1: 1
Output:		( In 0.39 seconds )
1: [ 1 -1 ]

12) ->AP  -  polynom into Algebraic Polynom
             Note: Flag -3 is only temporarily cleared if
             set, but left unchanged !!

             Note2: The variable 'X' is purged if it
             contains anything !!

Input:
1: [poly]
Output:
1: algebraic poly in the variable 'X' .

advanced_users-advanced_users-advanced_users-advanced_use


The commands described below are invisible but typeable -
that is, you can't see them in the menu, but you can use
them by writing their names. Why did I do so ?! There's
one very special reason : those commands are very useful
for the more advanced user who wants to perform e.g. a
series of multiplications *without* losing the accuracy
that transforming the internal representation of the
polynoms back and forth into real vectors causes, nor
losing any speed ! The ideal cases are multiplication or
division of polynoms containing very big or small
numbers. Be aware of the following :

NO ERROR CHECKING IS PERFORMED IN THESE INTERNAL
ROUTINES. USE THEM WITH CARE OR THEY'LL BLOW OUT YOUR
MEMORY !!!

No stripping of leading zeros is done automatically by
these functions ! Remember, they're only released as
*extra* features, the package itself is safe and contains
already all you need ;-)


13) A->FL - Array [ F% ] to (long-)Floating-point List
            { L% }

14) FL->A - (long-)Floating-point List { L% } to Array
            [ F% ]

15) FL-L  - (long-)Floating-point List { L% } to List (of
            normal floats) { F% }
                  
16) FLDIV - Division of a { L% } by { L% } . Returns { L%
            } at level 1 & 2 .

17) FLMUL - Multiplication of a { L% } by { L% }

18) FLADD - Addition of a { L% } with a { L% }

19) FLPF  - Partial Fraction of { L% } and { L% } . Note:
            level 2 represents poly, level 1 represents
            list of roots !

20) R->FL -  (  { L% }  { L% }  ) (roots to polynom)

21) FLRDER - Rational derivative of { L% } { L% } 

22) FLFACT - Factorizing { L% } .
             Note: input demands a NULL{} at level 2 !
             Returns a list of all the factors as
             { { L% } ... { L% } } .

23) MULEL  - Multiplication of a { L% } by a L% .
             Note: Input 2: L%
                         1: { L% }   !!
                   Output: the elements in the stack  !!

24) DIVEL - Division of a { L% } by a L% . Note: input
            and output as MULEL  !!

25) STRIP0 - Strips all leading 0s (zeros) of a { B% } ,
             that is, either { F% } or { L% } . Note: the
             last letter of the name is a zero !

ROMPTR 4C8 19 (L%%SQRT) is pure assembly for taking the
              roots of a L% Developed by Mario & Mika

ROMPTR 4C8 24 (L%FLEVAL ) is pure assembly for evaluating
              a { L% } at L% Developed by Mika


FOR YOU WHO AREN'T USED TO LIBRARIES :


INSTRUCTIONS FOR DOWNLOADING:
1) send POLY4_6.LIB (change name if you have comma as
radix !) to the 48 (binary mode)
2) recall the contents of POLYLIB to the stack
3) purge  POLY4_6.LIB  (optional)
4) enter 0 or 1 or 2 (corresponding to the desired port)
and hit STO
5) turn the calculator OFF
6) turn the calculator ON

The library is now installed !
If (3) wasn't done, do it now (to save bytes)

INSTRUCTIONS TO PURGE THE LIBRARY:
Although I can't imagine why anybody would like to purge
this beauty, here it is:
:&:1224 DUP DETACH PURGE


Once the lib is installed, the values are :
Library POLY , LID 1224 , bytes 5422.5 , checksum #239Ah .
(Observe that these values are from taking "bytes" when
the stack shows : Library 1224 : POLY )

This package has been written almost entirely on the 48
itself, using:

From Detlef Mueller and Raymond Hellstern : <-RPL-> 4.2 ,
<-LIB-> 1.8

From Detlef Mueller : Several utilities

From Mika Heiskanen : SDBG 1.0b, MKROM 1.0b, SADHP 1.05g
(this one on Unix :)

From Fatri & me : StringWriter 3.12

Many thanks to ...


... Detlef Mueller for his friendship, for the skeleton
    of the abs-lib (not used yet), for opinions, tricks,
    etc

... Mika Heiskanen for %%RND, L%SQRT and L%FLEVAL in
    Assembly, his outstanding on-line-service for his
    programs, as well as for many suggestions about L%,
    tricks, etc

... Mario Mikocevic for L%SQRT in Assembly

... Peter Larsson from whom I borrowed a HP48SX for
    almost a week so that I could develop this last
    version 4.6 and the GX versions of my other libs

... Joe Horn for the kernel of the sorting algorithms I
    use in this version

... everybody - more than I ever expected ! - who sent me
    a mail of appreciation for POLY 3.xx and POLY 4.xx

... everybody who reported bugs in all versions, as well
    as all of you who asked for little improvements here
    and there, even when I read your mail *after* having
    produced those versions ... :)

Any questions? Feel free to mail me!

 +------------------------------------------------------+
 | carlos@dd.chalmers.se              Computer Logics   |
 |                                                      |
 | Address:                           Phone:            |
 | Carlos Ferraro                     (+46 31) 7750549  |
 | Godhemsgatan 60 A                                    |
 | 414 68 Gothenburg                       o o          |
 | SWEDEN                                  \___/        |
 +------------------------------------------------------+

@@QRAD       SG
QRAD, Quotient of RADicals, by Joe Horn.
Uses SSQR. See SSQR for full documentation.
@@SSQR       SG
SSQR (Simplified SQuare Root) and
QRAD (Quotient of RADicals)
     by Joseph K. Horn

                         +--------+
                         |  SSQR  |
                         +--------+

shecters@ucunix.san.uc.edu [Robb Shecter] writes:

> Does anyone know of an algorithm to simplify something
> like √128 and get 8*√2 as the result?

The following 'SSQR' program requires the FCTR library by
Klaus Kalb, which is totally awesome and should be in
every HP48 (it works in HP48 version A-P). It's the
fastest 48 factorizer in the world. It's available on
Goodies Disk #8.

INSTRUCTIONS: To get the simplified square root of N,
type N (not its square root) and press SSQR ("Simplified
SQuare Root").

EXAMPLES:

   128 SSQR    '8*√2' (Robb's example)
123456 SSQR    '8*√1929' in .6 sec (on a 48G)
    22 SSQR    '√22' (composite but no square factor)
    23 SSQR    '√23' (prime)
    24 SSQR    '2*√6' (composite with square factor)
    25 SSQR    5 (perfect square)

                         +--------+
                         |  QRAD  |
                         +--------+

Related to this is the problem of turning a decimal
number into a ratio of square roots and/or integers. The
following 'QRAD' program calls 'SSQR' and therefore
requires the FCTR library and automatically simplifies.

INSTRUCTIONS: To turn an unrecognizable decimal (or
unsimplified algebraic ratio) into a simplified ratio of
square roots, run QRAD ("Quotient of RADicals"). Returns
good results if the input is a ratio of roots, or a ratio
of a root and an integer, or a ratio of integers.

EXAMPLES:

.217005559243 QRAD    '√17/19' (radical/integer)
.945905302928 QRAD    '√17/√19' (radical/radical)
3.9000674758  QRAD    '17/√19' (integer/radical)
.894736842105 QRAD    '17/19' (integer/integer)
11.313708499  QRAD    '8*√2'  (Robb's example)
'123*√456'    QRAD    '246*√114' (simplification)
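The trick behind QRAD can be sketched in Python: if the
input is a ratio of square roots and/or integers, its
square is rational, so a simple fraction can be recovered
from the decimal (illustrative only; the denominator
bound is my own choice, not the program's):

```python
from fractions import Fraction

def qrad(x):
    # x is assumed to be sqrt(p)/sqrt(q) for smallish integers
    # p and q, so x*x is close to the rational number p/q.
    pq = Fraction(x * x).limit_denominator(100000)
    return pq.numerator, pq.denominator

print(qrad(.217005559243))   # (17, 361): sqrt(17)/sqrt(361) = sqrt(17)/19
```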

Students would *kill* for this program, so don't tell 'em
you have it.  -jkh-
@@SIMPLEX2   SG
(Comp.sys.hp48)
Item: 2940 by kevin@rodin.wustl.edu [Kevin Ruland]
Subj: Simplex v1.2 library
Date: 04 Feb 1993


Simplex Library v1.2                               2-4-93


This library is used to solve linear programming problems
using the 2 phase simplex algorithm. I find it very
fast: it can solve an 8-variable by 5-constraint problem
in less than 7 seconds, performing 5 iterations on the
tableau. This is over 50% better than the original
userRPL/sysRPL version.

The problem this program is intended to solve is of the
form,

     min    c.x
     st     A.x =  b    (* b must be greater than zero *)
              x >= 0

The way the problem is entered is as a matrix of the
following form,

             [ c |  0 ]
             [---+----]
             [ A |  b ]

For example, to solve the following problem:

           max x1 + 2 x2 + x3
           st
               x1 + x2      <= 3
               x1 - x2 + x3 <= 3
                    x2 + x3 <= 3
               x1 + x2 + x3 >= 3
                    x2      <= 2
               x1, x2, x3 >= 0

We convert it to the standard form

           min - x1 - 2 x2 - x3
           st
               x1 + x2      + x4                     = 3
               x1 - x2 + x3      + x5                = 3
                    x2 + x3           + x6           = 3
               x1 + x2 + x3                - x7      = 3
                    x2                          + x8 = 2
               x1, ..., x8 >= 0

and write it as a matrix to enter into the program:

          [[ -1 -2 -1 0 0 0  0 0 0 ]
           [  1  1  0 1 0 0  0 0 3 ]
           [  1 -1  1 0 1 0  0 0 3 ]
           [  0  1  1 0 0 1  0 0 3 ]
           [  1  1  1 0 0 0 -1 0 3 ]
           [  0  1  0 0 0 0  0 1 2 ]]
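
If you'd rather not do this bookkeeping by hand, it is
mechanical: append one slack column (for <=) or surplus
column (for >=) per constraint, then stack [ c | 0 ] on
top of [ A | b ]. A Python sketch of that step (the
helper name 'build_lp' and its argument layout are my
own, not part of the library):

```python
def build_lp(c, rows):
    """Assemble the [ c | 0 / A | b ] input matrix.

    rows is a list of (coeffs, sense, rhs) triples, where
    sense is +1 for a <= constraint (slack added) and -1
    for a >= constraint (surplus subtracted).
    (Illustrative only; the library itself is RPL.)
    """
    m = len(rows)
    mat = [list(c) + [0] * (m + 1)]      # objective row [ c | 0 ]
    for i, (coeffs, sense, rhs) in enumerate(rows):
        slack = [0] * m
        slack[i] = sense                 # +1 slack, -1 surplus
        mat.append(list(coeffs) + slack + [rhs])
    return mat

# The example above:
c = [-1, -2, -1]
rows = [([1, 1, 0], 1, 3), ([1, -1, 1], 1, 3),
        ([0, 1, 1], 1, 3), ([1, 1, 1], -1, 3),
        ([0, 1, 0], 1, 2)]
matrix = build_lp(c, rows)   # reproduces the matrix shown above
```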

To solve the problem right away, just enter the matrix
and press the [INIT] key, which initializes the variable
'LPdat' in the current directory. Then press the [SOLVE]
key, which iterates and gives the solution in three
parts as follows:

           3:  :\Gl: [ -1 0 -1 0 0 ]
           2:  :Z: -6
           1:  :X: [ 2 .999999999998 2 0 0 0 2 1 ]

Where X is the actual point that minimizes the problem,
\Gl (lambda) is the solution to the dual problem (the
shadow prices of each of the constraints), and Z is the
objective function value at the point found to be the
minimum.
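
It's easy to sanity-check the reported answer: plug X
back into the standard-form data and confirm A.x = b and
c.x = Z. In plain Python (not part of the library, and
with the .999999999998 entry rounded to 1):

```python
# Standard-form data from the example above.
c = [-1, -2, -1, 0, 0, 0, 0, 0]
b = [3, 3, 3, 3, 2]
A = [[1,  1, 0, 1, 0, 0,  0, 0],
     [1, -1, 1, 0, 1, 0,  0, 0],
     [0,  1, 1, 0, 0, 1,  0, 0],
     [1,  1, 1, 0, 0, 0, -1, 0],
     [0,  1, 0, 0, 0, 0,  0, 1]]
x = [2, 1, 2, 0, 0, 0, 2, 1]   # reported X, rounding noise cleaned

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

residuals = [dot(row, x) - bi for row, bi in zip(A, b)]  # all zero
z = dot(c, x)                                            # -6
```

Every constraint comes out tight against its
right-hand side, and the objective value is -6, matching
the :Z: tag.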

This is the end of the quick-start procedure for solving
Linear Programs (LPs) with my program. Now let's get
into some details...

The solution of an LP can be classified into,

     infeasible, when there is no point satisfying the
                 set of constraints

     unbounded, when the set of constraints cannot
                restrict the decrease of the objective
                function (c.x)

     feasible & bounded, when there is a solution to the
                         set of constraints which
                         minimizes the objective function
                         value.

The program can handle all three cases, giving the
solution when it is feasible and bounded, providing a ray
(which is a vector through which you can infinitely
decrease the objective function while remaining in the
feasible region) when the solution is unbounded, or
flagging that there is no feasible solution.

To achieve the different solutions, the program goes
through the two phase simplex method automatically (you
don't have to include artificial variables). Phase 1 is
used to derive a starting basic feasible solution by
introducing a set of artificial variables with the
objective of minimizing their sum; when this is achieved,
the set of artificial variables is removed and Phase 2
starts with the original objective function. (Refer to
any LP book for more details on the 2-phase simplex
method.) One important thing to notice is that the
columns corresponding to the artificial variables,
although not taken into account on phase 2, are not
removed from the problem because it is sometimes useful
to have these columns at the end of the problem in order
to perform sensitivity analysis.

The basic algorithm that I used to solve the problem is a
pseudo revised simplex method, where all that is
maintained is the inverse of the basis, the set of basic
and non basic variables, and the rest of the data
provided at the initialization process. I think this is
the most efficient procedure in terms of time because it
requires fewer computations...
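
The core saving in a revised simplex is that each
iteration updates the stored basis inverse with a single
pivot instead of re-inverting from scratch. A minimal
sketch of that update in Python (the name 'pivot_update'
is my own; 'col' is the entering column already
multiplied through by the current inverse, and 'r' is
the leaving row):

```python
def pivot_update(Binv, col, r):
    """Product-form pivot on the basis inverse.

    Binv: current basis inverse (list of row lists).
    col:  Binv times the entering column.
    r:    index of the leaving row.
    Row r is scaled by 1/col[r]; every other row i has
    col[i] times the new row r subtracted from it.
    """
    m = len(Binv)
    new = [[0.0] * m for _ in range(m)]
    for j in range(m):
        new[r][j] = Binv[r][j] / col[r]
    for i in range(m):
        if i != r:
            for j in range(m):
                new[i][j] = Binv[i][j] - col[i] * new[r][j]
    return new
```

For instance, starting from the identity inverse and
bringing the column (2, 1) into basis position 0 yields
[[0.5, 0.0], [-0.5, 1.0]], which is indeed the inverse
of the new basis [[2, 0], [1, 1]].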

Here's a brief explanation of what the library's commands
do.

  [INIT]  Initializes the set of variables that are
          needed by the program.
  [TABL]  Extracts the current tableau from the problem.
  [SOLU]  Extracts the solution from the current tableau.
  [SRAY]  Extracts a ray from an unbounded LP.
  [TRACE] Performs one iteration at a time. It is
          equivalent to [SOLVE], but you can take a look
          at the partial tableaus that are generated by
          the procedure.
  [SOLVE] Gives the answer to the LP if it exists.
  [SLACK] ( 1: array -> 1: array ) Adds a slack
          variable to each constraint. ( 2: array 1:
          list -> 1: array ) Adds a slack variable to
          each constraint in the list. Constraints are
          numbered from 1 (meaning the second row of the
          matrix is the first constraint).
  [NEWV]  ( 3: problem (m+1 x n+1 array)
            2: costs of new vars (1 x p array)
            1: constraint coeffs of new vars (p x m array)
            -> 1: new problem (m+1 x n+p+1 array) )
          Adds variables to a problem.
  [ABOUT] Minimalist "about" screen.
@@NUM        SG
NUM, a toolkit for heavy math.  Menu keys:

     [SOLV]  Solving Equation
     [SYST]  Solving System of Equations
     [INTG]  Integrating Function
     [DIFF]  Computing Differential Equation
     [ASIM]  Continual Analogical Simulation
     [IPOL]  Interpolation
     [CFIT]  Curve Fitting
     [TRANS] Transforming Coordinates
     [REV]  Reversing \GSDAT Matrix
     [ORD]  Ordering \GSDAT Matrix
     [DRW]  Plotting \GSDAT Matrix (autoscaled)
     [SCL]  Plotting \GSDAT Matrix (scaled plot)
     [LIN]  Connecting scattered Points by Lines

The actual documentation file is over 55K, much too large
to fit in this DOC viewer.  Go into the MATH subdirectory
on
this disk, copy NUM.EXE onto your PC's hard disk, and run
it there.  It'll create NUM.DOC which you can then view
with LIST.COM; I recommend printing a hard copy.
